How do Convolutional Neural Networks Work?

Breakthroughs in deep learning in recent years have come from the development of Convolutional Neural Networks (CNNs or ConvNets). CNNs have been a driving force in the deep learning field and can even surpass human accuracy in image recognition.
Published: Oct 06, 2022

What is a Convolutional Neural Network?

A Convolutional Neural Network is a feed-forward neural network whose artificial neurons respond to stimuli within a limited receptive field, which makes it well suited to large-scale image processing. A CNN consists of one or more convolutional layers topped by fully connected layers, together with associated weights and pooling layers. This structure lets CNNs exploit the two-dimensional structure of the input data. Compared with other deep learning architectures, CNNs give better results in image and speech recognition, and the model can be trained with the backpropagation algorithm. Compared with other deep feed-forward networks, CNNs also have fewer parameters to learn, which makes them an attractive deep learning architecture.

CNNs are powerful at image recognition, and many image recognition models extend the CNN architecture. It is also worth mentioning that the CNN is a deep learning model inspired by the visual system of the human brain. Understanding CNNs also makes it easier to learn other deep learning models.

Feature:

A CNN compares images piece by piece; the pieces it looks for are called features. By matching rough features at roughly the same positions, a CNN distinguishes images far better than it could by comparing whole images. Each feature is like a miniature image, a smaller two-dimensional matrix that captures an element common to the images being compared.

Convolution:

When a CNN analyzes a new image, it does not know in advance where these features will appear, so it tries matching them at every position in the image. Computing how well a feature matches across the whole image creates a filtering mechanism. The mathematical principle behind this mechanism is called convolution, which is where the name CNN comes from.

The basic principle of convolution is to measure how well a feature matches a patch of the image: multiply each pixel of the feature by the corresponding pixel of the patch, sum the products, and divide by the number of pixels. If every pixel matches, the result is 1; if every pixel is opposite, the result is -1. Repeating this calculation at every position in the image completes the convolution.

Arranging these values by position produces a new two-dimensional matrix: the original image filtered by the feature, which tells us where the feature appears in the original image. Values close to 1 mark strong matches, values close to -1 mark strong mismatches, and values near 0 mark almost no similarity at all. The next step is to apply the same method to the other features, convolving each across the image. The result is a set of filtered images, one for each feature. The entire convolution operation can be treated as a single processing step; in a CNN, this step is called a convolutional layer, which implies that more layers follow.
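The match-score arithmetic above can be sketched in a few lines of Python. This is a toy illustration with invented pixel values (+1 and -1) and an invented 2x2 "diagonal" feature, not an implementation from any particular library:

```python
def match_score(patch, feature):
    """Multiply corresponding pixels, sum, and divide by the pixel count.
    A perfect match gives 1.0; a perfect mismatch gives -1.0."""
    n = len(feature) * len(feature[0])
    total = sum(patch[i][j] * feature[i][j]
                for i in range(len(feature))
                for j in range(len(feature[0])))
    return total / n

def convolve(image, feature):
    """Slide the feature over every valid position and record its score."""
    fh, fw = len(feature), len(feature[0])
    out = []
    for r in range(len(image) - fh + 1):
        row = []
        for c in range(len(image[0]) - fw + 1):
            patch = [image[r + i][c:c + fw] for i in range(fh)]
            row.append(match_score(patch, feature))
        out.append(row)
    return out

# Invented example: a 3x3 image of a diagonal line, matched against a
# 2x2 diagonal feature.
image = [
    [ 1, -1, -1],
    [-1,  1, -1],
    [-1, -1,  1],
]
diagonal = [
    [ 1, -1],
    [-1,  1],
]
print(convolve(image, diagonal))  # [[1.0, -0.5], [-0.5, 1.0]]
```

The two 1.0 entries mark where the diagonal feature lines up exactly with the image.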

The operation of a CNN is computationally intensive. While we can explain how a CNN works on a single sheet of paper, the number of additions, multiplications, and divisions grows quickly. With so many factors driving the computation count, the problems CNNs deal with become complex with little effort, and it is no wonder that some chipmakers design and build specialized chips for the computational demands of CNNs.

Pooling:

Pooling is a method of compressing an image while retaining its important information, and its working principle requires no more than secondary-school mathematics. Pooling slides a window across the image and keeps only the maximum value within each window. In practice, a square window two or three pixels on a side with a two-pixel stride is an ideal setting.

After the original image is pooled, the number of pixels it contains is reduced to a quarter of the original, but because each pooled value is the maximum within its window, the result still records how well each region matched the feature. Pooled information focuses on whether a matching feature exists in the image rather than exactly where it is, which helps the CNN detect a feature without being distracted by its position.

The function of the pooling layer is thus to shrink one or more images into smaller ones: the result has the same number of feature maps but far fewer pixels. This eases the computational burden mentioned earlier; reducing an 8-megapixel image to 2 megapixels up front makes all subsequent work easier.
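A minimal sketch of max pooling, assuming the 2x2 window and two-pixel stride described above (the input values are invented for illustration):

```python
def max_pool(image, size=2, stride=2):
    """Slide a size-by-size window with the given stride; keep each
    window's maximum value, shrinking the image."""
    out = []
    for r in range(0, len(image) - size + 1, stride):
        row = []
        for c in range(0, len(image[0]) - size + 1, stride):
            row.append(max(image[r + i][c + j]
                           for i in range(size) for j in range(size)))
        out.append(row)
    return out

# An invented 4x4 filtered image; pooling halves each dimension,
# keeping the strongest match from each window.
filtered = [
    [ 1.0, -0.5,  0.5, -1.0],
    [-0.5,  1.0, -1.0,  0.5],
    [ 0.5, -1.0,  1.0, -0.5],
    [-1.0,  0.5, -0.5,  1.0],
]
print(max_pool(filtered))  # [[1.0, 0.5], [0.5, 1.0]]
```

The 4x4 input becomes a 2x2 output, but each retained value still says how strongly its region matched the feature.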

Linear rectifier unit:

An important step in a CNN is the Rectified Linear Unit (ReLU), which replaces every negative value in a filtered image with 0. This trick keeps the values flowing through the network from collapsing toward 0 or blowing up toward infinity. The rectified result has the same number of pixels as the input, except that all negative values have been replaced with zeros.
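Rectification is simple enough to express in a single line; a sketch on invented values:

```python
def relu(image):
    """Replace every negative value with 0; keep the rest unchanged."""
    return [[max(0.0, v) for v in row] for row in image]

print(relu([[1.0, -0.5], [-0.5, 1.0]]))  # [[1.0, 0.0], [0.0, 1.0]]
```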

Deep learning:

After being filtered, rectified, and pooled, the original image becomes a set of smaller images containing feature information. These images can then be filtered and compressed again; their features grow more complex with each pass, and the images grow smaller. The lower processing layers capture simple features such as corners or spots of light, while the higher processing layers capture more complex features such as shapes or patterns, and these high-order features are usually easy to recognize.

Fully connected layer:

The fully connected layer collects the high-level filtered images and converts their feature information into votes. In traditional neural network architectures, the fully connected layer is the primary building block. When an image reaches this layer, all of its pixel values are treated as a one-dimensional list rather than the earlier two-dimensional matrix. Each value in the list casts a vote on whether the symbol in the picture is, say, a circle or a cross. Because some values are better at detecting crosses and others at detecting circles, some votes count for more than others; the strength of each vote is expressed as a weight, or connection strength. So every time the CNN judges a new image, the image passes through the many lower layers before reaching the fully connected layer; after the vote, the option with the most votes becomes the category for the image.

Like other layers, multiple fully connected layers can be stacked, because their inputs (lists) and outputs (votes) have the same form. In practice, several fully connected layers are often combined, with the intermediate ones voting for virtual, hidden options. Whenever we add a fully connected layer, the network can learn more complex feature combinations and make more accurate judgments.
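The voting described above amounts to a weighted sum per option. The sketch below uses a hypothetical four-value input (a flattened, pooled feature map) and invented weights for two options, "circle" and "cross":

```python
def fully_connected(flat_inputs, weights):
    """Each output class collects a weighted vote from every input value."""
    return [sum(x * w for x, w in zip(flat_inputs, class_weights))
            for class_weights in weights]

flat = [1.0, 0.0, 0.0, 1.0]     # invented input: strong diagonal responses
weights = [
    [0.1, 0.9, 0.9, 0.1],       # "circle": trusts the off-diagonal values
    [0.9, 0.1, 0.1, 0.9],       # "cross":  trusts the diagonal values
]
votes = fully_connected(flat, weights)
winner = max(range(len(votes)), key=lambda i: votes[i])
print(["circle", "cross"][winner])  # prints "cross"
```

The diagonal-heavy input earns far more weighted votes from the "cross" row, so that option wins.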

Backpropagation:

The machine learning technique of backpropagation helps us decide these weights. To use backpropagation, we prepare a collection of images whose answers are already known, plus an untrained CNN in which every feature pixel and every fully connected weight is set randomly. We then train this CNN on the labeled images.

After the CNN processes each image, a round of voting determines its category. The difference between this vote and the known label is the recognition error, and adjusting the features and weights reduces it. In each adjustment, a feature or weight is nudged slightly higher or lower, the error is recomputed, and the change is kept only if it reduced the error. Adjusting every pixel in the convolutional layers and every weight in the fully connected layers in this way yields a set of values slightly better at judging the current image; the process then repeats over more labeled images. Misjudgments on individual pictures fade during training, while features and weights common across the pictures persist. Given enough labeled images, the features and weights settle into a steady state that recognizes most images well. Backpropagation is, however, a very computationally expensive step.
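Real backpropagation computes gradients analytically, but the nudge-and-keep loop described above can be caricatured with a simple trial-and-error tweak. Everything here (the tiny linear "network", the labeled data, the step size) is invented purely for illustration:

```python
def predict(x, weights):
    """A tiny stand-in 'network': just a weighted sum of the inputs."""
    return sum(xi * wi for xi, wi in zip(x, weights))

def error(weights, examples):
    """Total squared difference between predictions and labels."""
    return sum((predict(x, weights) - y) ** 2 for x, y in examples)

def tune(weights, examples, step=0.01, rounds=1000):
    """Nudge each weight up and down; keep whichever change lowers the
    error. A coordinate-wise caricature of what backpropagation does
    analytically via gradients."""
    for _ in range(rounds):
        for i in range(len(weights)):
            base = error(weights, examples)
            for delta in (step, -step):
                weights[i] += delta
                if error(weights, examples) < base:
                    break              # keep the improving change
                weights[i] -= delta    # undo and try the other direction
    return weights

# Invented labeled data: the first input should predict 1.0, the second 0.0.
examples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
w = tune([0.5, 0.5], examples)  # w drifts toward roughly [1.0, 0.0]
```

After enough rounds the weights settle near the values that minimize the error, which mirrors how repeated small adjustments over labeled images push a CNN's features and weights toward a steady state.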

Hyperparameters:

  • How many features should be in each convolutional layer? How many pixels should be in each feature?
  • What is the window size in each pooling layer? What stride should it use?
  • How many hidden neurons (options) should each additional fully connected layer have?

In addition to these questions, there are many higher-level structural decisions, such as how many processing layers a CNN should have and in what order. Some deep neural networks include thousands of processing layers, so the design possibilities are vast. With so many permutations, we can test only a small subset of possible CNN configurations. CNN design therefore usually evolves alongside the knowledge accumulated by the machine learning community, with occasional unexpected jumps in performance. Many improvement techniques have also been tested and found effective, such as new kinds of processing layers or more complex ways of connecting layers.

Source: mcknote
