
How AI understands and interprets visual media


Artificial intelligence, or AI, is a broad term encompassing many aspects of our world. From advancing medical science to adding new features to even the most affordable Android smartphones, no single definition can capture precisely what AI is. That’s because AI has many uses and real-world applications across various industries.


A computer vision model works in the background and is often responsible for the unique image features in AI-powered applications. In this guide, we discuss what a computer vision model is and the three types of vision models used.


What does a computer vision model do?

Much like ChatGPT, the popular AI-powered chatbot that changed the world forever with its large language model (LLM) technology, a computer vision model is the next evolution beyond text-based applications. Using what’s known as a large vision model (LVM), a computer can interpret and identify images and visuals from the real-world environment.

When adequately trained on a specific dataset, AI applications can detect, identify, and classify objects in the real world. A neural network operates like the human brain but relies on software-based nodes in place of neurons.

Source: V7 Labs

In humans, a neuron is a cell that carries electrical signals to and from the brain. The software-based nodes in a neural network instead use onboard computational power while training on a specific dataset. This is the core of deep learning technology, giving AI applications capabilities that go beyond what we thought was possible.
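As a rough illustration, a single software node can be sketched as a weighted sum of its inputs passed through an activation function. The input values, weights, and bias below are made up for demonstration; a real network learns them during training.

```python
import math

def node(inputs, weights, bias):
    """One software 'neuron': a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation so the output stays in (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Illustrative values only: training would adjust the weights and bias.
signal = node(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(signal, 3))  # prints 0.535
```

A neural network is, at its core, many such nodes wired together in layers, with each node's output feeding the inputs of the next layer.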

Since a neural network is like the human brain and an LVM is designed to mimic the human eye, combining these technologies allows AI-powered applications to aid us with the visual aspect of our world rather than just text.

Different forms of vision models

Now that you know what a vision model is, there are a few forms that you should be aware of. The three primary vision model forms you’ll come across are convolutional neural networks (CNNs), machine learning, and feature-based. Each has a specific purpose and applications that use it. The sections below discuss what they do and why they are essential to how many vision-based AI applications work.

Convolutional neural networks

CNNs are deep learning models that excel at processing and identifying images or objects in the visual space. They are also autonomous, learning from datasets without human intervention. CNNs are built from four layer types: convolutional, pooling, hidden (fully connected), and output. Each layer serves a specific purpose and relies on various algorithms. With those separate layers working together, a CNN can understand and identify complex data in an efficient and organized way.

The convolutional layer is the first step in training an AI application that relies on a computer vision model. In this phase, an image is scanned into the neural network so that the computer can understand precisely what it sees, down to each pixel. This allows it to detect and identify shapes, patterns, and textures. From there, the data passes to the pooling layer, which condenses the large dataset to a reasonable size. It removes irrelevant or unneeded data while keeping the most relevant information from the convolutional step.

An example chart of what a CNN vision model sees as the input and output data

Source: IBM

It goes to the hidden layer next, which stacks and collects the data from the previous two layers. This is where the basic features of the final image results start to form, with more details added as it receives additional complex data from each pass.

The last step is the output layer, which takes everything from the previous layers and puts it together. For example, in a computer vision model for image classification, the final result might put the data into specific neurons based on the categories it received from the original dataset.
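The layer walkthrough above can be sketched in miniature. The toy example below (plain NumPy, with an invented image and kernel rather than a real trained network) convolves a tiny image with an edge-detecting kernel and then pools the result, mirroring the convolutional and pooling steps described:

```python
import numpy as np

def convolve(image, kernel):
    """Convolutional step: slide the kernel over the image to pick up
    local patterns (valid padding, stride 1)."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def max_pool(fmap, size=2):
    """Pooling step: keep the strongest response in each size x size patch,
    shrinking the feature map while preserving the relevant information."""
    rows, cols = fmap.shape[0] // size, fmap.shape[1] // size
    trimmed = fmap[:rows * size, :cols * size]
    return trimmed.reshape(rows, size, cols, size).max(axis=(1, 3))

# Toy 6x6 image with a bright vertical stripe, and a vertical-edge kernel.
image = np.zeros((6, 6))
image[:, 3] = 1.0
kernel = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])

features = np.maximum(convolve(image, kernel), 0)  # ReLU keeps positive responses
pooled = max_pool(features)
print(pooled.shape)  # prints (2, 2)
```

A real CNN stacks many such layers and feeds the flattened result through hidden and output layers to produce the final classification scores.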

Machine learning

Like CNNs, machine learning is another popular method of training a neural network for computer vision. The two share some ideas but differ in what they are designed for.

Machine learning trains a neural network with predefined datasets or algorithms, allowing it to identify unknown patterns. Through intensive repetition, it can then predict future results or draw conclusions about new data. Machine learning works well for image detection and other image-related purposes, depending on the application it’s used for.
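As a simplified illustration of this idea, the sketch below "trains" on a toy set of hand-picked feature vectors (the labels and numbers are invented for demonstration) and then predicts the class of an unseen sample by finding the nearest class average:

```python
import numpy as np

# Toy "dataset": two illustrative features (brightness, edge density)
# for a handful of labeled samples. Values are made up, not real data.
training = {
    "daytime photo": np.array([[0.9, 0.2], [0.8, 0.3], [0.85, 0.25]]),
    "night photo":   np.array([[0.1, 0.6], [0.2, 0.5], [0.15, 0.55]]),
}

# "Training": summarize each class by its centroid (mean feature vector).
centroids = {label: samples.mean(axis=0) for label, samples in training.items()}

def predict(features):
    """Classify an unseen sample by the closest class centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(centroids[label] - features))

print(predict(np.array([0.82, 0.28])))  # prints: daytime photo
```

Real machine learning pipelines use far larger datasets and more sophisticated models, but the pattern is the same: learn a summary from labeled examples, then use it to predict results for data the system has never seen.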

A diagram of a neural network with an 'Input Layer' on the left, several 'Hidden Layers' in the middle, and an 'Output Layer' on the right, showing interconnected nodes and the data flow from input to output.

Source: “Machine learning methods for wind turbine condition monitoring: A review” by Stetco et al.

Machine learning can be used for image classification, but it is designed as a universal solution for almost any industry or application, accommodating a wide range of datasets and algorithms. CNNs, by contrast, are designed for image-based processing. Using the four layers of a CNN means the results are detailed and fine-tuned to the data required from the original image. For computer vision models, CNNs are a popular choice over general machine learning for complex image-based datasets since they are purpose-built for that task.

Feature-based

Compared to their popular CNN counterparts, feature-based models take a different approach. Rather than scanning and identifying each pixel of an image, they look for larger, more specific details, aspects, or unique features, such as the edges of an object, lines, and shapes or textures within an image. Like CNNs, feature-based models require multiple steps to process the image data.

The first step in a feature-based model is the feature detection stage, which scans the original image for points of interest. It relies on vision algorithms to detect, highlight, and characterize the features found within that image. For example, the scale-invariant feature transform (SIFT) algorithm locates details regardless of their size or rotation, so they can be matched accurately across different images. The speeded-up robust features (SURF) algorithm is a popular variation on SIFT: it offers similar capabilities but processes data faster at a slight cost in accuracy.

An example of a butterfly image being processed by the SURF vision algorithm, detecting the white blobs on its wings

Source: OpenCV

SIFT is slower but better suited for tasks requiring more image detail, while SURF excels at balancing speed and accuracy; both are popular choices. Once the image data is processed, the model uses a separate algorithm to create keypoint descriptors, which highlight and tag the unique features discovered in the previous steps.

The final step involves matching and pairing the results with other images based on the descriptors produced in the earlier steps. For example, the Hough transform algorithm accurately matches shapes against other images, even if the original data is noisy.
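A minimal, illustrative version of Hough-style line detection can be written in a few lines. In this sketch (not a production implementation, and with invented test points), each edge point votes for every line it could lie on, so the true line emerges as the highest-voted bin even with noisy outliers mixed in:

```python
import numpy as np

def hough_lines(points, thetas=np.deg2rad(np.arange(0, 180, 15)), rho_max=20):
    """Vote for every (rho, theta) line each edge point could lie on;
    real lines stand out as peaks even when some points are noise."""
    votes = {}
    for x, y in points:
        for theta in thetas:
            rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
            key = (rho, round(float(np.degrees(theta))))  # bin by (rho, degrees)
            votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)  # the (rho, theta) bin with the most votes

# Points along the vertical line x = 5, plus two noisy outliers.
points = [(5, y) for y in range(8)] + [(1, 9), (12, 2)]
print(hough_lines(points))  # prints (5, 0): the line x = 5 at theta 0 degrees
```

The outliers each scatter their votes across many bins, while the eight collinear points all land in the same bin, which is why the technique tolerates noisy data so well.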

Feature-based vision models are fast, lighter on computational resources, and work well for less demanding tasks since they don’t examine every pixel of an image. A CNN model is better for tasks that require precise details, immense dataset scale, or complex computations. CNN models rely on deep learning, a step beyond what feature-based models can achieve. Because of this, many in the vision model industry are turning to CNN models to power their applications, especially those designed for general consumer use.

Real-world vision model applications

The sections below give popular examples of applications, products, or services that use vision models in the real-world market. You may be using vision model features daily without realizing it.

Many of these examples use deep learning technologies with CNN-based vision models for complex image tasks, alongside elements of machine learning and feature-based models for less demanding features. Vision models don’t always involve AI, but they are often used in AI-based applications.

Google Photos

One of Google’s most well-known consumer examples of vision models is Google Photos. The app relies almost exclusively on vision models, from object and scene recognition to tagging and matching faces across your library. It also uses vision models to extract text from images, suggest enhancements for photos taken in poor lighting, and automatically create collages from similar images or faces. Due to its complexity, Google Photos uses deep learning technologies and often relies on CNN-based vision models.

Google Photos logo overlayed on polaroid pictures hero image

Source: Unsplash / Wikimedia Commons

Google Lens

Much like Google Photos, Google Lens relies on CNN-based vision models to bring its unique features to life. Google Lens uses vision models to offer an image-based search engine experience and a digital assistant in one product. Point your camera at something, such as a sign in a different language, and it layers the translated text on top of it. Google Lens identifies landmarks, plant types, and birds you might encounter outside. You can also scan and copy text from a document or find similar products, such as shoes or furniture.

The Google Search page shown in dark mode.

Source: Google

Google Earth

Another popular Google product is Google Earth, which requires complex operations and CNN-based vision models. Since deep learning is involved, these CNN models are trained on massive sets of satellite and aerial imagery. Image classification and depth estimation are only a few of the tasks that allow Google Earth to simulate a virtual model of Earth with accuracy and detail.

Seamlessly stitching together images on a large scale is a challenging task. CNN-based vision models excel in this environment since they are designed to handle demanding and complex features.

Map style options shown next to satellite view of earth.

Source: Google Earth Pro web app

Self-driving cars

Self-driving or driverless cars need significant onboard computational power and various vision models to be truly autonomous. They also require multiple cameras and sensors to see and understand the world around them while navigating the roadway. This example requires complete synchronicity between the three vision model types: CNNs, machine learning, and feature-based.

CNNs are used for complex tasks like object and lane detection, while machine learning models handle the massive datasets collected on the road; both are trained and updated frequently. Feature-based models may use the SIFT algorithm to help the camera system match road features under various lighting conditions.

The image shows a cyan wireframe holographic model of a car on a dark grid background, with a glowing 'AI' symbol on the left and partially visible text suggesting 'self-driving' technologies on the right.

Source: SenseTime

Face ID

Apple introduced Face ID with the iPhone X to help secure our iOS devices. It requires multiple sensors and technologies working together to unlock your device. Still used on the latest iPhones, Face ID improves with each new version. It appears to use onboard CNN-based vision models to extract unique facial features from the IR camera, comparing your on-device facial scan with a depth map before approving or denying the unlock request. Machine learning is also used to continually adapt to your facial features and lighting conditions with each unlock.

Dynamic island not expanded on the iPhone 15 Pro Max

Vision models help make the magic happen

A computer vision model is likely behind many of your favorite apps that offer unique image or vision-based features. Depending on the feature or task, it uses at least one of the three vision model types: CNN-based, machine learning, or feature-based. CNN vision models are commonly used because they rely on deep learning technologies, allowing for greater scalability. Machine learning and feature-based vision models are used for less demanding purposes. In many cases, all three vision model types work together in some way to create a unified experience.

If you’re looking for the best AI apps and services for your Android device, we have you covered. We show a few excellent examples to help you start your AI journey, including some of the top AI-powered chatbots.
