The Self-Protection of the Internet World: To Improve Security, Protection Has No Boundaries, and Only Zero Trust Can Be Truly Safe

Faced with a wide range of security threats, both the internal network environment and company-managed personal computers can be attacked and paralyzed, hijacked, or even turned into attackers themselves. There is no longer any distinction between inside and outside, so a zero-trust management model is imperative.
Published: Jun 12, 2020
Pay more attention to network security in the Internet age

In the era of digitalization and networking, trust is one of the foundations for the operation of many systems, the interoperability of information, and the conduct of transactions. As technologies evolve, the availability and usability of services keep improving, but ensuring security has always been a problem, one that grows ever harder to deal with and that has even come to seriously threaten the normal operation of the real world.

According to the 2018 Global Risk Report released by the World Economic Forum, cyber attacks ranked sixth among the top ten risks by severity of impact; among the top ten risks by likelihood, cyber attacks entered the top three for the first time, behind only extreme weather events and natural disasters, with data fraud or theft, also closely related to security, in fourth place.

There are too many security threats to enumerate. Even after a threat peaks and declines, it often resurfaces after a while, evolves, or blends with other threats in unpredictable ways. Overall, these threats run rampant thanks to the rapid rise and popularization of emerging technologies such as cloud services, mobile applications, and the Internet of Things. Their impact is overwhelming: individuals and enterprises can neither resist them nor stay out of their way. At the same time, because the door stands wide open, threats that cross inside and outside, such as data leakage, DDoS, APT, and privileged account abuse, are endless and hard to cut off completely. Faced with this situation, not knowing what to believe is an accurate portrayal of the present.

An intranet environment where boundaries no longer exist

In traditional network security architecture, internal users and application systems are generally considered trusted, so fewer verification mechanisms are applied to them. Nowadays, however, users and application systems are deployed everywhere, and there is no longer a distinction between inside and outside. In looking at security protection, we should therefore adopt an attitude of continuous verification and reluctance to trust.

Change in security thinking: from trust to zero trust

In the past, to strengthen information security, the IT industry invested considerable resources in trying to enhance user trust. To trace the source of these efforts, we must first talk about Microsoft, which began to promote Trustworthy Computing in 2002.

At that time, vulnerabilities in the company's operating systems Windows NT 4.0 and Windows 2000 were being abused, and the systems suffered network worm attacks such as Code Red, Nimda, and Blaster. After painful reflection, Microsoft actively hardened the security of its operating systems and other software: beginning with Windows XP SP2, Windows shipped with a built-in Windows Firewall and automatic Windows Update, and the company promoted the Security Development Lifecycle (SDL) program.

Microsoft's follow-up security features in Windows Vista and Windows Server 2008, such as the built-in User Account Control (UAC) and Network Access Protection (NAP), were also developed on this concept. Beyond Microsoft itself, AMD, HP, IBM, Intel, and Microsoft cooperated in 2003 to establish the Trusted Computing Group (TCG), which has promoted the development of several well-known security technologies: the Trusted Platform Module (TPM) now built into more and more computers and servers, Trusted Network Connect (TNC) for network access control, and the Self-Encrypting Drive (SED) specification for encryption on the hard disk itself.

During those years, the concepts of layered defense and defense in depth were quite popular, drawing attention to network access control (NAC), and many collaborative-defense products appeared on the market.

Later, with the rise of server virtualization platforms and cloud services, software-defined protection and cloud security products also took the stage, supporting the implementation of security defense strategies in these atypical network environments.

For example, veteran network firewall vendor Check Point launched Software Defined Protection in 2014, while server virtualization vendor VMware released vShield in 2011 and gained NSX through its 2012 acquisition of network virtualization vendor Nicira. These products provide network segmentation to reduce the possibility of lateral attacks inside the network. Large public cloud services also offer multiple network partitioning mechanisms; on AWS, for instance, Amazon VPC provides subnets, security groups, and network access control lists (NACLs).
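All of these segmentation mechanisms reduce lateral movement by default-denying any traffic that no rule explicitly allows. A minimal sketch in Python, with hypothetical subnets and rules modeled loosely on cloud security groups (not any vendor's actual API):

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow rules, loosely modeled on cloud security groups:
# each rule permits traffic from a source CIDR to one destination port.
RULES = [
    {"cidr": "10.0.1.0/24", "port": 443},   # web tier -> HTTPS
    {"cidr": "10.0.2.0/24", "port": 5432},  # app tier -> database
]

def is_allowed(src_ip: str, dst_port: int) -> bool:
    """Default-deny: a flow passes only if some rule explicitly matches it."""
    return any(
        ip_address(src_ip) in ip_network(rule["cidr"]) and dst_port == rule["port"]
        for rule in RULES
    )

print(is_allowed("10.0.1.25", 443))  # True: covered by the first rule
print(is_allowed("10.0.3.7", 443))   # False: no rule covers this subnet
```

Real security groups are stateful and support protocols and port ranges; the point of the sketch is only the default-deny posture that makes lateral attacks harder.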

In 2009, Forrester principal analyst John Kindervag proposed a new security model called the Zero Trust Model, which received great attention at the time. However, few security vendors responded actively at first, the most prominent being next-generation firewall vendor Palo Alto Networks. By 2015, Forrester found the related application fields gradually expanding to cover virtualized network infrastructure (VNI) and network orchestration solutions. In the fourth quarter of 2016, they significantly increased the number of threat mitigation technologies based on the zero-trust concept to 20 types. In the report released in January this year, Forrester re-introduced the zero-trust product types and corresponding vendors, framing them as an extended ecosystem and distinguishing a zero-trust platform, security automation and orchestration, security visibility and analytics, and security layers covering people, workloads, data, networks, and networked devices. "People" is further subdivided into interaction processes and identity management, while "network" refers specifically to network segmentation and isolation.

Network segmentation and isolation is the most common embodiment of the zero-trust model. When Forrester began advocating zero trust in 2010, it also designed a zero-trust network architecture based on this concept, comprising a segmentation gateway (SG) that integrates multiple control and protection functions; micro-core and perimeter (MCAP) zones divided into multiple isolation areas according to operational attributes; a server responsible for centralized management; and a data acquisition network (DAN) responsible for collecting and analyzing network traffic.

The basic spirit of zero trust: trusted or not, everything must pass verification first

What exactly is zero trust? Why are more and more security vendors embracing this concept? In short, in a zero-trust network environment, all network traffic is untrusted. Therefore, security professionals must verify and protect all resources, restrict and strictly enforce access control, and inspect and log all network traffic.

According to John Kindervag's interpretation of the zero-trust network architecture in 2010, it rests on three core concepts: (1) on security devices, there are no longer trusted and untrusted interfaces; (2) there are no longer trusted and untrusted networks; (3) there are no longer trusted and untrusted users.

Extending the zero-trust model to the level of information security yields three basic concepts: (1) no matter where you are, ensure that all resources are accessed securely; (2) adopt a least-privilege strategy and strictly enforce access control; (3) inspect and log all traffic.

Under these concepts, we must assume all network traffic is threatening: unless the security team verifies that it is legitimately authorized, has passed inspection, and is protected, it is not trusted. For example, data access between internal and external networks is usually protected by encrypted channels, whereas cyber criminals can easily sniff unencrypted data; security professionals must therefore treat internal data access with the same protective attitude as external access over the Internet.
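The principle that no flow is trusted by default, regardless of where it originates, can be sketched as a simple policy check. The field names below are illustrative, not any product's API:

```python
def is_trusted(flow: dict) -> bool:
    # Zero trust: a flow is trusted only when every check passes.
    # Location (internal vs. external) is deliberately not consulted.
    return (
        flow.get("authenticated", False)
        and flow.get("authorized", False)
        and flow.get("encrypted", False)
    )

# An unencrypted flow on the intranet is treated exactly like one from outside.
internal_plaintext = {"authenticated": True, "authorized": True, "encrypted": False}
external_tls = {"authenticated": True, "authorized": True, "encrypted": True}
print(is_trusted(internal_plaintext))  # False
print(is_trusted(external_tls))        # True
```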

In managing access behavior, the zero-trust model also employs more methods to verify identity and mitigate harm.

What we relied on in the past was mainly protection based on role-based access control (RBAC), commonly used in network access control, infrastructure software, and identity and access management systems. Under a zero-trust model, RBAC is no longer the only method; the focus shifts to configuring least-privilege access and enforcing precise access control to reduce the impact of attacks and abuse. For example, based on a user's stated identity, appropriate access to resources is allocated and limited to the scope needed to perform their tasks, so that users can only do the right things, instead of being blindly trusted and left with the possibility of doing wrong through intent or negligence.
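A least-privilege RBAC check reduces to a default-deny lookup. The roles and permission strings below are purely illustrative:

```python
# Hypothetical role -> permission mapping; every permission must be
# granted explicitly, and anything unlisted is refused (default deny).
ROLE_PERMISSIONS = {
    "auditor": {"reports:read"},
    "operator": {"reports:read", "jobs:run"},
}

def check_access(role: str, permission: str) -> bool:
    """Unknown roles and unlisted permissions are both denied."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(check_access("auditor", "reports:read"))   # True
print(check_access("auditor", "jobs:run"))       # False: outside the role's scope
print(check_access("intruder", "reports:read"))  # False: unknown role
```

The design choice worth noting is that absence of a grant means denial; nothing is implicitly permitted, which is exactly the least-privilege posture the model calls for.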

Of course, no restriction in the world is perfect and seamless. To further ensure that things are done right, and to catch errors early, we also need continuous detection and logging. For example, we need network analysis and visibility (NAV) capability over network traffic activity, so that detection and logging proceed together, processing all network traffic actively and in real time, not passively, with delay, or only for internal traffic.

To meet such requirements, the zero-trust model specifically defines the characteristics that network traffic analysis and visibility should possess: for example, flexibly scaling to the size of deployment and continuously maintaining situational awareness, including network discovery (finding and tracking assets), flow data analysis (analyzing traffic patterns and user behavior), packet capture and analysis, network metadata analysis (analyzing network stream packets), and network forensics (assisting incident response and criminal investigation).
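The "log everything, analyze continuously" requirement can be sketched as a tiny flow recorder with a baseline check; the flow fields and the port baseline are assumptions chosen for illustration:

```python
import json
import time

def log_flow(flow: dict, sink: list) -> None:
    """Record every flow, timestamped, to a central sink (a SIEM in practice)."""
    sink.append(json.dumps({"ts": time.time(), **flow}))

def unusual_flows(flows: list, baseline_ports=frozenset({80, 443})) -> list:
    """Flag flows whose destination port falls outside the observed baseline."""
    return [f for f in flows if f["dst_port"] not in baseline_ports]

sink = []
flows = [
    {"src": "10.0.1.5", "dst_port": 443},
    {"src": "10.0.1.9", "dst_port": 6667},  # unexpected port: worth a look
]
for f in flows:
    log_flow(f, sink)

print(len(sink))             # 2: every flow is recorded, no sampling
print(unusual_flows(flows))  # only the flow to port 6667
```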

In the design of zero-trust architecture, John Kindervag believes four components need to work together: the network segmentation gateway (SG), micro-core and perimeter (MCAP), centralized management, and the data acquisition network (DAN).

The SG consolidates functions originally dispersed across multiple security products, such as firewalls, network intrusion prevention systems, web application firewalls, network access control, content filtering gateways, VPN gateways, and other encryption products, and integrates them deeply at the network level. The SG also builds in a packet forwarding engine so that it can sit at the center of the network. Security personnel can define common control policies in the SG, which must be configured with multiple high-speed network interfaces to handle enormous traffic processing loads.

The SG here is quite similar to the familiar UTM appliance. However, John Kindervag notes that UTM controls the network boundary, whereas the SG is deployed at, and targets, the center of the network.

MCAPs are the individual network partitions: switching zones formed by connecting to the network interfaces of the SG and other devices. Each zone is served by dedicated micro-core switches that share the same functions and policy-control attributes, so that each isolation zone becomes a microperimeter. Security personnel can aggregate all the switches in the MCAPs into a unified switching network to achieve centralized management.

Centralized management here refers to the processing operations of the network backplane, which can be defined through a transparent, integrated management mechanism spanning all MCAPs. It must also be upgraded from command-line operation of individual components to a centralized, easier-to-use management system.

As for the DAN, its purpose is to highlight the importance of capturing network data. It is itself a kind of MCAP, in which a security information and event management system (SIEM) and network analysis and visibility (NAV) tools can be deployed, responsible for centrally acquiring, analyzing, and logging all network traffic.

With such a network zone, we can process all traffic passing through the SG, including the portion connecting all the MCAPs: by mirroring and forwarding the traffic, we can effectively collect the packets, Syslog, SNMP, and other information transmitted within it and store it in a single location, facilitating subsequent network traffic analysis and approaching real-time processing.

Attacks come from all directions, so boundaries must be divided more finely

In the zero-trust model, attack threats are viewed more broadly. It is like protecting the safety of a president: attention must be paid to the identity, location, and accessibility of the protected objects; border control requires multiple layers of partitions, such as perimeter fences and military police; secret service agents surrounding the president's car form micro-perimeter protection; and one must also guard against remote sniper attacks by doing endpoint control well.

Application control strategy based on the zero-trust model

The level of threat varies with motivation. However, whether malicious or well-intentioned, endpoint devices and applications may become threats. Therefore, to handle different levels of threat, blacklists and whitelists can be used for control; for relatively normal, legitimate endpoints and applications, we can use isolation and enforce protection through least-privilege access.
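The three-tier policy just described (block known-bad, run known-good with least privilege, isolate the rest) can be sketched as follows; the application names are hypothetical examples:

```python
# Hypothetical lists; a real deployment would key on signed hashes, not names.
BLOCKLIST = {"knownmalware.exe"}
ALLOWLIST = {"winword.exe", "chrome.exe"}

def app_policy(app: str) -> str:
    """Map an application to a zero-trust handling decision."""
    if app in BLOCKLIST:
        return "deny"                   # known malicious: refuse to run
    if app in ALLOWLIST:
        return "allow-least-privilege"  # known good: run with minimal rights
    return "isolate"                    # unknown: sandbox until vetted

print(app_policy("knownmalware.exe"))  # deny
print(app_policy("chrome.exe"))        # allow-least-privilege
print(app_policy("newtool.exe"))       # isolate
```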

The zero-trust model has begun to expand significantly, and can be applied at more levels

Forrester's earliest zero-trust research reports paid a great deal of attention to network architecture, and network segmentation and isolation does play a very important role there. Hence, in recent years, whenever this concept has come up, most people have thought of next-generation firewalls as the product type. However, zero trust is not just network isolation but an overall protection strategy involving processes and technologies. So when Forrester has elaborated the zero-trust model in recent years, it has not restricted it to the network, but extended protection further.

For example, at the beginning of this year, they specifically proposed the Zero Trust eXtended Ecosystem concept, which decomposes the zero-trust model: data sits at the center, surrounded by the four links of people, devices, networks, and workloads, held together by technologies such as network visibility and analytics, and automation and orchestration.

More companies recognize and value zero trust, and both Google and Netflix have begun to adopt it

Beyond the research institutions that have long advocated this concept, market views have shifted significantly, and the zero-trust model has recently received a great deal of attention. There are several reasons for this.

One of them is related to Google. The company's chief information officer has said that large enterprises should establish a "zero trust" infrastructure. Google also released the research paper "BeyondCorp: A New Approach to Enterprise Security" and subsequently published several more papers exploring the concept and implementation of BeyondCorp. Since 2017, Google has offered new cloud services based on this self-developed zero-trust security framework and has begun to actively promote it: at the Google Cloud Next conference, for example, it introduced Cloud Identity-Aware Proxy (Cloud IAP), allowing users to build the context-aware secure access environment described by BeyondCorp. Google is proud of this research project, begun in 2011, claiming that it lets most employees work safely every day in untrusted network environments without VPN protection.

Beyond Google, we recently saw another Internet company build a zero-trust architecture: the well-known online video company Netflix.

At the Usenix Enigma conference earlier this year, the company's senior security engineer Bryan Zimmer publicly introduced their zero-trust architecture, called the Location Independent Security Approach (LISA), which rests on three basic principles: (1) the keys to trust are a person's identity and the health status of the endpoint, not the location; (2) the office network cannot be trusted; (3) devices must be isolated.
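LISA's first two principles amount to a trust decision that consults identity and device health while deliberately ignoring network location. A minimal sketch under that reading (not Netflix's actual implementation):

```python
def lisa_grants_access(user_verified: bool, device_healthy: bool, network: str) -> bool:
    """Trust hinges on identity and device health; the 'network' argument is
    accepted but intentionally ignored, since no network (office included)
    is trusted."""
    _ = network  # location earns no trust either way
    return user_verified and device_healthy

# Being on the office network confers no extra trust:
print(lisa_grants_access(True, True, "coffee-shop-wifi"))  # True
print(lisa_grants_access(True, False, "office-lan"))       # False: unhealthy device
```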

In the long run, to improve overall security protection, more organizations should join the ranks of those implementing the zero-trust model. As the new wave of digital transformation continues, application systems and services are bound to operate in ever more open environments. If we keep using the traditional dichotomies (internal versus external, trust versus distrust) to define the scope of information security protection, it will be increasingly difficult to cope with this unknown situation.

If we can thoroughly understand this reality, and think through and build a zero-trust model suited to our own operating environment, then through continuous verification, logging, analysis, and supervision we can effectively implement both "control" and "protection", rather than a defensive strategy focused on only one end, and thereby obtain solid assurance.

Google invests in the development of a zero-trust model

The zero-trust model is not only advocated by security vendors; cloud service giant Google has invested in it as well. The company spent six years developing a zero-trust security framework called BeyondCorp, shifting access control from the original network boundary to the control of individual devices and users. Today its employees can connect to company application systems from anywhere, with no need for encrypted connections via traditional VPN.

Source: ithome
