What is YOLOv8?
YOLOv8 is the latest family of YOLO-based object detection models from Ultralytics, offering state-of-the-art performance.
Compared to previous versions of YOLO, the YOLOv8 model is faster and more accurate, while also providing a unified framework for training models to perform
- object detection,
- instance segmentation, and
- image classification.
As of this writing, there are still many features to be added to the Ultralytics YOLOv8 repository. This includes the full set of export functions for the trained models. Additionally, Ultralytics will publish an article on Arxiv comparing YOLOv8 to other high-end vision models.
- What is YOLOv8?
- What's new in YOLOv8?
- Models available in YOLOv8
- How to use YOLOv8?
- YOLOv8 vs YOLOv5
- Conclusion
What's new in YOLOv8?
Ultralytics has released a brand-new repository for YOLO models. It is built as a unified framework for training object detection, instance segmentation, and image classification models.
These are some of the main features of the new version:
- Friendly API (command line + Python).
- Faster and more accurate.
- supports
- object detection,
- instance segmentation,
- image classification.
- Extensible to all previous versions.
- New backbone.
- New anchor-free detection head.
- New loss function.
YOLOv8 is also highly efficient and flexible, supporting multiple export formats, and the model can run on both CPU and GPU.
Models available in YOLOv8
There are five models in each category of YOLOv8 models for detection, segmentation, and classification. YOLOv8 Nano is the fastest and smallest while YOLOv8 Extra Large (YOLOv8x) is the most accurate but also the slowest among them.
| YOLOv8n | YOLOv8s | YOLOv8m | YOLOv8l | YOLOv8x |
YOLOv8 comes with the following pretrained models:
- Object detection checkpoints trained on the COCO detection dataset with an image resolution of 640.
- Instance segmentation checkpoints trained on the COCO segmentation dataset with an image resolution of 640.
- Pretrained image classification models on the ImageNet dataset with an image resolution of 224.
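The checkpoint list above follows a regular naming scheme, which the small sketch below summarizes. This is an illustrative lookup only; the sizes, suffixes, and resolutions are taken from the bullets above, and the helper `checkpoint_name` is a hypothetical convenience, not part of the Ultralytics API.

```python
# Illustrative summary of the pretrained YOLOv8 checkpoints listed above.
# Weight-file names follow the repository's scheme: a "-seg" suffix for
# segmentation models and "-cls" for classification models.
SIZES = ["n", "s", "m", "l", "x"]  # Nano, Small, Medium, Large, Extra Large

PRETRAINED = {
    "detect":   {"dataset": "COCO",     "imgsz": 640, "suffix": ""},
    "segment":  {"dataset": "COCO",     "imgsz": 640, "suffix": "-seg"},
    "classify": {"dataset": "ImageNet", "imgsz": 224, "suffix": "-cls"},
}

def checkpoint_name(size, task):
    """Build the weight-file name for a given model size and task."""
    return f"yolov8{size}{PRETRAINED[task]['suffix']}.pt"

for task, info in PRETRAINED.items():
    print(task, [checkpoint_name(s, task) for s in SIZES], info["imgsz"])
```

For example, the Extra Large segmentation checkpoint resolves to `yolov8x-seg.pt`, which is the model used for the segmentation runs later in this post.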
Let's take a look at the outputs of the YOLOv8x instance segmentation and detection models.
How to use YOLOv8?
To use the full potential of YOLOv8, we need to install the repository requirements as well as the ultralytics package.
To install the requirements, we first need to clone the repository.
git clone https://github.com/ultralytics/ultralytics.git
Then install the requirements.
pip install -r requirements.txt
With the latest release, Ultralytics YOLOv8 provides both a full Command Line Interface (CLI) API and a Python SDK for performing training, validation, and inference.
To use the yolo CLI, we need to install the ultralytics package.
pip install ultralytics
How to use YOLOv8 via command line interface (CLI)?
After installing the necessary packages, we can access the YOLOv8 CLI using the yolo command. The following is an example of how to run object detection inference using the yolo CLI.
yolo task=detect mode=predict model=yolov8n.pt source="image.jpg"
The task flag can accept three arguments: detect, classify, and segment. Similarly, the mode can be train, val, or predict. We can also pass the mode as export when exporting a trained model.
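The way these two flags combine can be sketched in a few lines of plain Python. This is a minimal illustration of how a yolo command line is assembled from its flag values, not the actual Ultralytics argument parser; the `build_command` helper is invented here for demonstration.

```python
# A minimal sketch (not the real Ultralytics implementation) of how the
# yolo CLI's task and mode flags combine into a command line.
TASKS = {"detect", "classify", "segment"}
MODES = {"train", "val", "predict", "export"}

def build_command(task, mode, model, source=None):
    """Assemble a yolo CLI invocation from its flag values."""
    if task not in TASKS:
        raise ValueError(f"unknown task: {task}")
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    cmd = f"yolo task={task} mode={mode} model={model}"
    if source is not None:
        cmd += f' source="{source}"'
    return cmd

print(build_command("detect", "predict", "yolov8n.pt", "image.jpg"))
```

Any task can be paired with any mode, so for example `build_command("segment", "train", "yolov8n-seg.pt")` is just as valid as the detection example above.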
The following image shows all the possible yolo CLI flags and arguments.
How to use YOLOv8 with the Python API?
We can also create a simple Python file, import the YOLO module, and run the task of our choice.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load a pretrained YOLOv8n model
model.train(data="coco128.yaml")  # train the model
model.val()  # evaluate model performance on the validation set
model.predict(source="https://ultralytics.com/images/bus.jpg")  # predict on an image
model.export(format="onnx")  # export the model to ONNX format
For example, the above code first trains the YOLOv8 Nano model on the COCO128 dataset, evaluates it against the validation set, and runs a prediction on a sample image.
Let's use the yolo CLI and run inference with object detection, instance segmentation, and image classification models.
Download Code: To easily follow this tutorial, download the code by clicking the button below. It's free!
Inference results for object detection
The following command runs detection on a video with the YOLOv8 Nano model.
yolo task=detect mode=predict model=yolov8n.pt source='input/video_3.mp4' show=True
Inference runs at almost 105 FPS on a laptop GTX 1060 GPU, and we get the following output.
The YOLOv8 Nano model mistakes cats for dogs in some frames. Let's run the detection on the same video using the YOLOv8 Extra Large model and check the outputs.
yolo task=detect mode=predict model=yolov8x.pt source='input/video_3.mp4' show=True
The Extra Large model runs at an average of 17 FPS on the GTX 1060 GPU.
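Frame rates like the 105 FPS and 17 FPS above are simply the reciprocal of the average per-frame inference time, so it is easy to sanity-check the speed gap between the two models. The latencies below are back-calculated from the FPS figures in this post under the assumption that inference dominates the per-frame time.

```python
# FPS is the reciprocal of the average per-frame inference time (in ms).
def fps(latency_ms):
    return 1000.0 / latency_ms

# Approximate per-frame latencies implied by the numbers above
# (assumption: inference dominates per-frame time on the GTX 1060).
nano_ms = 1000.0 / 105    # ~9.5 ms per frame for YOLOv8 Nano
xlarge_ms = 1000.0 / 17   # ~59 ms per frame for YOLOv8 Extra Large

print(f"Nano: {fps(nano_ms):.0f} FPS, Extra Large: {fps(xlarge_ms):.0f} FPS")
print(f"Speed ratio: {fps(nano_ms) / fps(xlarge_ms):.1f}x")
```

In other words, on this GPU the Nano model is roughly six times faster than the Extra Large model, which is the usual trade-off between speed and accuracy across the model sizes.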
Although there are fewer misclassifications this time, the model still makes wrong predictions in a few frames.
Inference results for instance segmentation
Running inference with the YOLOv8 instance segmentation model is just as easy. We only have to change the task and the model name in the above command.
yolo task=segment mode=predict model=yolov8x-seg.pt source='input/video_3.mp4' show=True
Since instance segmentation is combined with object detection, the average FPS this time was around 13.
The segmentation maps appear reasonably clean in the output. Even when the cat is partially hidden in the last few frames, the model is able to detect and segment it.
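The overlays you see in the output are produced by blending each instance's binary mask into the frame with a distinct color. Here is a toy sketch of that idea using plain nested lists; the real Ultralytics plotting code is more involved, and `overlay_mask` is an invented helper for illustration only.

```python
# A toy sketch of how an instance-segmentation mask is drawn on a frame:
# alpha-blend a solid color into the pixels where the binary mask is 1.
def overlay_mask(frame, mask, color, alpha=0.5):
    """frame: HxWx3 nested lists, mask: HxW of 0/1, color: (r, g, b)."""
    out = [[list(px) for px in row] for row in frame]  # deep copy of the frame
    for y, row in enumerate(mask):
        for x, m in enumerate(row):
            if m:  # blend only where the mask is set
                for c in range(3):
                    out[y][x][c] = int((1 - alpha) * frame[y][x][c] + alpha * color[c])
    return out

frame = [[[100, 100, 100] for _ in range(2)] for _ in range(2)]  # 2x2 gray image
mask = [[1, 0], [0, 1]]  # mask covers the diagonal pixels
blended = overlay_mask(frame, mask, color=(255, 0, 0), alpha=0.5)
print(blended[0][0])  # masked pixel, pulled toward red
print(blended[0][1])  # unmasked pixel, unchanged
```

With one such blend per detected instance (each with its own color), you get exactly the kind of colored segmentation maps shown in the video output.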
Inference results for image classification
Finally, since YOLOv8 already provides pretrained classification models, let's run classification inference on the same video with the yolov8x-cls model, the largest classification model provided by the repository.
yolo task=classify mode=predict model=yolov8x-cls.pt source='input/video_3.mp4' show=True
By default, the video is annotated with the top 5 classes provided by the model. Without post-processing, the annotations correspond directly to the ImageNet class names.
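Picking the "top 5 classes" is just a matter of sorting the model's class scores and keeping the five highest. The sketch below shows that step in isolation; the class names and scores are made up for illustration (a real classification model outputs scores over ImageNet's 1,000 classes).

```python
# A minimal sketch of the "top 5 classes" annotation: sort class scores
# and keep the k highest-scoring (name, score) pairs.
def top_k(scores, names, k=5):
    """Return the k (name, score) pairs with the highest scores."""
    ranked = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Hypothetical scores for a frame containing a cat (illustration only).
names = ["tabby", "tiger_cat", "Egyptian_cat", "lynx", "remote", "quilt", "couch"]
scores = [0.41, 0.22, 0.17, 0.08, 0.05, 0.04, 0.03]

print(top_k(scores, names))  # the five most probable classes for the frame
```

Since the annotations come straight from this ranking with no extra post-processing, the labels burned into the video are the raw ImageNet class names.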
YOLOv8 vs YOLOv7 vs YOLOv6 vs YOLOv5
Right off the bat, the YOLOv8 models seem to perform much better compared to previous YOLO models. Not only YOLOv5 models, YOLOv8 is also ahead of YOLOv7 and YOLOv6 models.
Compared to other YOLO models trained with an image resolution of 640, all YOLOv8 models perform better with a similar number of parameters.
Now, let's take a detailed look at how the latest YOLOv8 models stack up against Ultralytics' YOLOv5 models. The following tables show a complete comparison between YOLOv8 and YOLOv5.
overall comparison
Object Detection Comparison
Instance Segmentation Comparison
Image Classification Comparison
Clearly, the latest YOLOv8 models are much better than YOLOv5, except for one of the classification models.
Development of the YOLOv8 object detection model
Here is an image showing the timeline of YOLO object detection models and the development of YOLOv8.
YOLOv1
The first version of YOLO object detection, YOLOv1, was developed by Joseph Redmon et al. and published in 2015. It was the first single-stage object detection model, giving rise to SSD and all subsequent YOLO models.
YOLO 9000 (v2)
YOLOv2, also known as YOLO9000, was published by Joseph Redmon, the original author of YOLOv1. It improved on YOLOv1 by introducing the concept of anchor boxes and a better backbone, namely Darknet-19.
YOLOv3
In 2018, Joseph Redmon and Ali Farhadi published YOLOv3. It was more of a tech report than an architectural leap, but still a big improvement for the YOLO family. YOLOv3 uses the Darknet-53 backbone, residual connections, better pretraining techniques, and image augmentation to achieve its improvements.
Ultralytics YOLO Object Recognition Models
All YOLO object detection models up to YOLOv3 were written in the C programming language and used the Darknet framework, which made it difficult for newcomers to navigate the code base and modify the models.
Around the same time as YOLOv3, Ultralytics released the first YOLO (YOLOv3) implemented in the PyTorch framework. It was also much more accessible and easier to use for transfer learning.
Shortly after the release of YOLOv3, Joseph Redmon left the computer vision research community. YOLOv4 (by Alexey Bochkovskiy et al.) was the last YOLO model written in the Darknet framework. After that, many YOLO object detectors appeared; Scaled-YOLOv4, YOLOX, PP-YOLO, YOLOv6, and YOLOv7 are some of the best known among them.
After YOLOv3, Ultralytics also released YOLOv5, which was even better, faster, and easier to use than all the other YOLO models.
Now, in January 2023, Ultralytics has released YOLOv8 under the ultralytics repository, which is perhaps the best YOLO model to date.
🔥 YOLOv8 is finally here! Watch our video showing the results of object detection and instance segmentation prediction with the Ultralytics YOLOv8x model. ➡️ https://t.co/OHjVyBbUpO #yolo #yolov8 #yolov5 #objectdetection #deeplearning #ai #computervision pic.twitter.com/mjoXrJrbx3
— Satya Mallick (@LearnOpenCV), January 11, 2023
Conclusion
In this article, we discussed the latest version of the YOLO family of models, i.e., YOLOv8. We covered the new models, their performance, and the command line interface that ships with the package. We also ran inference on videos.
In future posts, we will also fine-tune YOLOv8 models on a custom dataset.
Let us know in the comments section if you do your own experiments.
In case you missed it, here is the full list of posts in our YOLO series:
- YOLOR Article Explanation and Comparison
- YOLOv6 Underwater Debris Detection Personalized Training
- YOLOv6 Object Detector Paper Explanation and Conclusion
- YOLOX object detector and custom training from drone data sets
- YOLOv7 Object Detector Training for Custom Datasets
- Explanation and conclusion of the YOLOv7 object detector document
- YOLOv5 Vehicle Dataset Custom Object Detector Training
- YOLOv5 Object Detection with OpenCV DNN
- YOLOv4 - Training a custom hole detector