In the realm of Computer Vision, a shift is underway. This article explores the transformative power of Foundation Models, digging into their role in reshaping the entire Computer Vision pipeline.
It also demystifies the hype behind the idea that stitching together a few models will solve Computer Vision: data is and remains king! In the Computer Vision Pipeline 2.0, training and labelling might slowly be replaced by Foundation Models, but this is a possibility, not the current reality.
From conceptual shifts to the integration of cutting-edge models like CLIP, SAM, and GPT-4V, we show you how and why we believe Computer Vision is heading into uncharted territory, where a Data-Centric approach remains the right one for setting up production-grade AI systems.
Table of contents
- What is different: Foundation Models
- The hype: CLIP + SAM + GPT-4V + [Whatever The Next Big Thing is]
- The traditional Computer Vision pipeline
- The new Computer Vision pipeline
- Be careful of what you read online
1. What is different: Foundation Models
1.1 What is a Foundation Model?
“We define foundation models as models trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks.” Stanford Institute for Human-Centered AI (HAI)
For instance, Grounding DINO [1], a zero-shot object detection model that extends DINO [2], can be employed to detect arbitrary objects in various contexts. This adaptability is attained by leveraging the knowledge and representations acquired during the foundational training.
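As a concrete illustration, here is a minimal sketch of zero-shot detection with the open-source GroundingDINO repository's inference helpers; the config/checkpoint paths, image file, prompt, and thresholds are placeholders, so treat it as illustrative rather than production code.

```python
# Minimal zero-shot detection sketch with the open-source GroundingDINO repo.
# Config/checkpoint paths, the image file, and thresholds are placeholders.
from groundingdino.util.inference import load_model, load_image, predict

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",  # repo config file
    "weights/groundingdino_swint_ogc.pth",              # downloaded weights
)
image_source, image = load_image("street_scene.jpg")    # hypothetical image

# A free-form text prompt makes detection "open-set": the model was never
# explicitly trained on these class names.
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="traffic light . bicycle . stop sign",
    box_threshold=0.35,
    text_threshold=0.25,
)
print(phrases)      # detected phrases
print(boxes.shape)  # normalized cxcywh boxes, one row per detection
```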
1.2 The rise of Foundation Models in Computer Vision
Figure 1 details the evolution of Computer Vision from a Foundation Model angle. We can see four main categories: Traditional Models (1998), Textually Prompted Models (2021), Visually Prompted Models (2021), and Heterogeneous Models (2023).
For every group, there is one particular model that accelerated the speed of evolution 🚀. For instance, Traditional “Inputs <Model> Output” Models evolved from LeNet and progressed through AlexNet [5], VGG [6], InceptionNet [7], and ResNet [8], and accelerated with the Vision Transformer (ViT) [9].
Despite some controversy over whether or not Computer Vision is dead, the truth is that Computer Vision is evolving in parallel with other major breakthroughs in Artificial Intelligence (e.g., LLMs, Diffusion Models).
1.3 Enter GPT-4V!
The current pinnacle of Vision in the Generative AI (GenAI) domain (given that Gemini isn’t fully available yet) is GPT-4V [4].
The following example shows how, given a query in the form of an image, this multi-modal Foundation Model can describe the provided image in very rich detail: perhaps even better than the average college student 😎!
- Input: [image: a horse and rider making a tight turn around a barrel]
- Code:
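A call along these lines, sketched here with the OpenAI Python SDK, can produce output like the one below; the prompt and image URL are placeholders standing in for the original snippet.

```python
# Sketch of querying GPT-4V with an image via the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # GPT-4V model name at the time of writing
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in rich detail."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/barrel_racing.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```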
- Output:
- 🤖 GPT-4V: “This image captures a high-energy moment of equestrian barrel racing. The focus is on a horse and rider duo executing a tight turn around a barrel. The horse is a bay with a shiny coat, and its muscular build is on full display. It has its ears pinned back, showing concentration or the strain of the tight turn, and it wears a bridle with a bit and reins. The rider is leaning into the turn, showing good balance and control. …”
2. The hype: CLIP + SAM + GPT-4V + [Whatever The Next Big Thing is]
2.1 The hype: computer vision is solved!
So, given that Foundation Models have taken the world by storm, the obvious question is now: Is Computer Vision finally solved?
One interesting way to answer this query is by looking at actionable answers! In other words: What are people out there creating with these new powerful building blocks? 🏗
Some argue that using GPT-4V in combination with other Foundation Models is enough to solve object detection. As we explain in the next section, the day-to-day reality of production ML systems is less rosy.
Figure 3 shows an instance where GPT-4V fails to count the number of hardhats correctly in 3 out of 5 attempts, with images taken randomly from Google. The time spent crafting a reliable prompt that works for 90% or more of cases will likely be far greater than the time spent training a well-tested Computer Vision model for this task.
2.2 The reality: A tool is not a system
Despite the promising breakthroughs brought by Foundation Models, stacking together a multitude of models to solve a problem at scale is quite different from building a demo.
To begin with, two of the fundamental costs associated with transforming a demo into a production-grade system are:
- 1) Development: A Foundation Model might work well in general, but what about your specific task or use-case? In this scenario you might need to fine-tune your Foundation Model. For instance, what if you work with satellite images and need to fine-tune CLIP for this domain? Assuming 4 months of development and 2 months of testing, a single ML Engineer’s salary for this task might be around GBP 35,000. What if you need the job done in 3 months instead of 6? Would you be willing to pay double this amount?
- 2) Engineering: Connecting the inputs and outputs of “chained” Foundation Models so that they operate at high speed and high throughput requires your MLOps team to figure out a pipeline that glues the models together. Foundation Models are great at zero-shot tasks, but assembling a chain of models, where the output of one stage becomes the input of the next, is only one of the challenges to be addressed, as the sketch after this list illustrates.
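To make the “connecting” challenge concrete, below is a minimal sketch of a two-stage chain: Grounding DINO produces text-prompted boxes that feed SAM for segmentation. It assumes the open-source GroundingDINO and segment-anything packages plus local checkpoints; even this toy chain needs glue code, because the two models speak different box formats.

```python
# Sketch: chaining two Foundation Models (Grounding DINO -> SAM).
# Assumes the open-source GroundingDINO and segment-anything packages,
# plus local checkpoints; illustrative only, not a production pipeline.
import torch
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

# Stage 1: zero-shot boxes from a text prompt.
dino = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py",
                  "weights/groundingdino_swint_ogc.pth")
image_source, image = load_image("construction_site.jpg")  # hypothetical image
boxes, logits, phrases = predict(model=dino, image=image, caption="hardhat",
                                 box_threshold=0.35, text_threshold=0.25)

# Glue code: Grounding DINO emits normalized cxcywh boxes, while SAM
# expects absolute xyxy pixel coordinates -- exactly the kind of
# input/output mismatch a chained pipeline has to handle.
h, w, _ = image_source.shape
boxes_xyxy = boxes * torch.tensor([w, h, w, h])
boxes_xyxy[:, :2] -= boxes_xyxy[:, 2:] / 2   # centre -> top-left corner
boxes_xyxy[:, 2:] += boxes_xyxy[:, :2]       # width/height -> bottom-right

# Stage 2: the detector's boxes become the segmenter's prompts.
sam = sam_model_registry["vit_h"](checkpoint="weights/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)
masks = [predictor.predict(box=box.numpy(), multimask_output=False)[0]
         for box in boxes_xyxy]
print(f"{len(masks)} masks for prompt 'hardhat'")
```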
2.3 But Foundation Models are actually good enough!
This is absolutely true and exciting! Foundation Models are reshaping Computer Vision!
In the next two sections we describe the traditional Computer Vision pipeline, and the role we expect Foundation Models to play in a new generation of Computer Vision pipeline.
3. The traditional Computer Vision pipeline
Figure 4 shows the traditional Computer Vision pipeline from Data Collection to Model Monitoring. The illustration highlights the nature of each stage: Model-Centric, Data-Centric or Hardware.
In this pipeline, the first three stages lead up to the main stage of a Model-Centric approach: Model Training. For many years, the Model-Centric approach has been crucial for developing new architectures (e.g., ResNet, Transformers).
However, to build production-grade ML systems, a Data-Centric approach has proven to be more effective. At least 6 of the 8 stages in this pipeline are Data-Centric in nature, meaning that they involve data at their core. For instance, Model Evaluation requires ML teams to identify the root causes of why a model is failing, so that it can be fixed before reaching the next stages. Likewise, Model Selection, often considered a Model-Centric stage, is in reality more closely tied to data: it demands a deep understanding of how a model's behaviour varies across different slices of data, as the toy example below illustrates.
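Here is a toy sketch of what slice-aware Model Selection looks like in practice, comparing per-slice accuracy with pandas; all column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical per-image evaluation results: model correctness plus the
# metadata that defines data slices (all names and values are illustrative).
df = pd.DataFrame({
    "correct":  [1, 0, 1, 1, 0, 1, 1, 0],
    "lighting": ["day", "night", "day", "day",
                 "night", "night", "day", "night"],
})

# A single aggregate score can hide slice-level failures ...
print("overall accuracy:", df["correct"].mean())

# ... while per-slice accuracy reveals where the model actually breaks.
print(df.groupby("lighting")["correct"].mean())
```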
We argue, though, that the rise of Foundation Models in Computer Vision might change the way this pipeline is built.
4. The new Computer Vision pipeline
Figure 5 shows the way we envision the Computer Vision pipeline 2.0. As this diagram reveals, Foundation Models are likely to eliminate two of the stages of the traditional pipeline:
- Data Annotation: Annotating data at scale (i.e., datasets of 1M+ samples) is perhaps the second most expensive process after model training, and controlling the quality of the labels is often the hardest challenge. Foundation Models could improve auto-labelling and, over time, take this stage out of the CV pipeline (see the sketch after this list).
- Model Training: Once you have an annotated dataset, you can actually train a model. Nowadays, training a high-quality model for a specific industry use-case is expensive: for large datasets, distributed training is the norm if you aim to train a model of decent quality. Hence, Foundation Models are well positioned to take over this stage of the CV pipeline too.
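As one hedged sketch of what Foundation-Model-driven auto-labelling could look like, the snippet below uses zero-shot CLIP classification via Hugging Face transformers to propose labels for unlabelled images; the candidate labels, confidence threshold, and file names are assumptions for illustration.

```python
# Sketch: zero-shot auto-labelling with CLIP (Hugging Face transformers).
# Candidate labels, confidence threshold, and file names are illustrative.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidate_labels = ["a worker wearing a hardhat", "a worker without a hardhat"]

def propose_label(path: str, threshold: float = 0.8) -> str | None:
    """Return a label proposal, or None if CLIP is not confident enough."""
    image = Image.open(path)
    inputs = processor(text=candidate_labels, images=image,
                       return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    best = probs.argmax().item()
    return candidate_labels[best] if probs[best] >= threshold else None

print(propose_label("site_photo_001.jpg"))  # hypothetical unlabelled image
```

Low-confidence images fall back to human review, which is how auto-labelling tends to shrink, rather than instantly eliminate, the annotation stage.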
As a result of these changes, all stages except deployment are likely to lean towards a data-first approach. Hence, data remains the key to building AI systems in the real-world.
5. Be careful of what you read online
The Computer Vision pipeline 2.0 represents a transformative shift led by Foundation Models, which promise to streamline the cumbersome processes of annotating large datasets and training high-quality models.
Be aware that data is still king in the production-grade world of machine learning. Even if Foundation Models replace annotation and training, you still need to acquire and pre-process data, and to select and define slices of data for evaluation and model selection. Additionally, monitoring data drift is crucial.
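For instance, a basic drift check can be as simple as a two-sample test between training-time and production-time feature distributions; the snippet below is a toy sketch with synthetic data using SciPy.

```python
# Toy drift check: compare a feature's distribution at training time vs.
# in production with a two-sample Kolmogorov-Smirnov test (synthetic data).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
prod_feature = rng.normal(loc=0.3, scale=1.0, size=5_000)  # shifted mean

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic = {stat:.3f})")
```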
Not everything you read online is true! Rapid prototyping by combining a bunch of models can help you brainstorm ideas, but a well-structured Data-Centric approach remains the key to building robust AI systems for real-world applications.
References
[1] Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
[2] Emerging Properties in Self-Supervised Vision Transformers
[3] Foundational Models Defining a New Era in Vision: A Survey and Outlook
[4] GPT-4V(ision) System Card
[5] ImageNet Classification with Deep Convolutional Neural Networks
[6] Very Deep Convolutional Networks for Large-Scale Image Recognition
[7] Going deeper with convolutions
[8] Deep Residual Learning for Image Recognition
[9] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Authors: Jose Gabriel Islas Montero, Dmitry Kazhdan
If you would like to know more about Tenyks, sign up for a sandbox account.