The Self-Protection of the Internet World: To Improve Security, Protection Has No Boundaries, and Only Zero Trust Can Be Truly Safe

Faced with various security threats, the internal network environment and the personal computers managed by a company can be attacked and paralyzed, hijacked, or even turned into attackers themselves. There is no longer a distinction between inside and outside, so a zero-trust management model is imperative.
Published: Jun 12, 2020

Pay more attention to network security in the Internet age

In the era of digitalization and networking, trust is one of the foundations for the operation of many systems, the exchange of information, and the conduct of transactions. As various technologies evolve, the availability and usability of services continue to improve, but ensuring their security has always been a problem. It is increasingly difficult to deal with, and has even seriously threatened the normal operation of the real world.

According to the 2018 Global Risks Report released by the World Economic Forum, cyberattacks ranked sixth among the top ten risks by severity of impact. Among the top ten risks by likelihood, cyberattacks entered the top three for the first time, behind only extreme weather events and natural disasters, while data fraud or theft, which is also closely related to security, ranked fourth.

There are too many security threats to enumerate. Even after a threat peaks and declines, it often resurfaces after a while, evolves, or combines with other threats in unpredictable ways. Overall, these threats have been able to run rampant because of the rapid rise and popularization of emerging technologies such as cloud services, mobile applications, and the Internet of Things. Their impact is overwhelming: neither individuals nor enterprises can resist them or stay out of the way. At the same time, because the door is wide open, threats that cross the boundary between inside and outside, such as data leakage, DDoS, APT, and privileged account abuse, are endless and difficult to cut off completely. Faced with this situation, not knowing what to believe is an accurate portrait of the present.

Intranet environment where boundaries no longer exist

In traditional network security architecture, internal users and application systems are generally considered to be in a trusted state, so fewer authentication and verification mechanisms are applied to them. Today, however, users and application systems are deployed everywhere, and there is no longer a distinction between inside and outside. When we look at security protection now, we should adopt an attitude of continuous verification rather than easy trust.

Change in security thinking: from trust to zero trust

In the past, to strengthen information security, the IT industry invested a great deal of resources in trying to enhance users' trust. To trace the source of these efforts, we must first talk about Microsoft, which began to promote Trustworthy Computing in 2002.

At that time, the company's operating systems Windows NT 4.0 and Windows 2000 were abused because of their vulnerabilities and suffered network worm attacks such as Code Red, Nimda, and Blaster. After painful reflection, Microsoft actively hardened the security of its operating systems and other software. Since Windows XP SP2, it has built in the Windows Firewall and automatic Windows Update, and promoted the Security Development Lifecycle (SDL) program.

Microsoft's follow-ups in Windows Vista and Windows Server 2008, the built-in User Account Control (UAC) and Network Access Protection (NAP), are security features developed from the same concept. Beyond Microsoft's own efforts, AMD, HP, IBM, Intel, and Microsoft cooperated in 2003 to establish the Trusted Computing Group (TCG). Together they have promoted the development of several well-known security technologies, such as the Trusted Platform Module (TPM) built into more and more computers and servers, Trusted Network Connect (TNC) providing network access control functions, and the self-encrypting drive (SED) specifications applied to encryption on the hard disk itself.

During those years, the concepts of layered defense and defense in depth were quite popular, drawing attention to the application of network access control (NAC), and many collaborative-defense products appeared on the market.

Later, with the rise of server virtualization platforms and cloud services, software-defined protection and cloud security products also began to take the stage, supporting the implementation of security defense strategies in these atypical network application environments.

For example, Check Point, a veteran network firewall manufacturer, launched Software Defined Protection in 2014; server virtualization platform vendor VMware released vShield in 2011, and its NSX grew out of the 2012 acquisition of network virtualization vendor Nicira. These can provide network segmentation functions to reduce the possibility of lateral attacks on the internal network. Large public cloud services also have multiple network partitioning mechanisms; in AWS, for example, Amazon VPC provides subnets, security groups, and network access control lists (NACLs).
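To make the segmentation idea concrete, here is a minimal sketch, in the spirit of a security group's default-deny rule evaluation. The rules and addresses are hypothetical illustrations, not the actual AWS API:

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-rules in the style of a cloud security group:
# each rule permits a protocol and port range from a source CIDR.
RULES = [
    {"protocol": "tcp", "ports": (443, 443), "source": "0.0.0.0/0"},      # HTTPS from anywhere
    {"protocol": "tcp", "ports": (5432, 5432), "source": "10.0.1.0/24"},  # DB only from app subnet
]

def is_allowed(protocol: str, port: int, source_ip: str) -> bool:
    """Security groups are default-deny: traffic passes only if some rule matches."""
    for rule in RULES:
        low, high = rule["ports"]
        if (rule["protocol"] == protocol
                and low <= port <= high
                and ip_address(source_ip) in ip_network(rule["source"])):
            return True
    return False
```

Because nothing matches unless a rule explicitly allows it, a database connection attempt from outside the application subnet is simply dropped, which is exactly how segmentation limits lateral movement.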

In 2009, Forrester analyst John Kindervag proposed a new security model called the Zero Trust Model, which received great attention at the time. However, few security vendors responded actively in the initial stage, with Palo Alto Networks, famous for its next-generation firewalls, being the main exception. By 2015, Forrester found that the related application fields had gradually expanded, covering virtualized network infrastructure (VNI) and network orchestration solutions. In the fourth quarter of 2016, they significantly expanded the zero-trust-based threat mitigation technologies they tracked to 20 types. Forrester's report released in January this year reintroduced the zero-trust product types and corresponding vendors, framing them as an extended ecosystem distinguished into a zero-trust platform; security automation and orchestration; security visibility and analytics; and the security pillars of people, workloads, data, networks, and devices. "People" is further subdivided into process interaction and identity management, while "network" refers specifically to network segmentation and isolation.

Network isolation is the most common element of the zero-trust model. When Forrester began to advocate the model in 2010, they also designed a zero-trust network architecture based on this concept. It included a segmentation gateway (SG) integrating multiple control and protection functions; microcores and perimeters (MCAPs), which divide the network into multiple isolation zones according to different operational attributes; a server responsible for the centralized management mechanism; and a data acquisition network (DAN) responsible for collecting and analyzing network traffic.

The basic spirit of zero trust: trusted or not, nothing counts until it passes verification

What exactly is zero trust? Why are more and more security vendors embracing this concept? In short, in a zero-trust network environment, all network traffic is untrusted. Therefore, security professionals must verify and protect all resources, restrict and strictly enforce access control, and detect and log all network traffic.

According to John Kindervag's interpretation of the zero-trust network architecture in 2010, he put forward the following three core concepts: (1) on security devices, there are no longer trusted and untrusted interfaces; (2) there are no longer trusted and untrusted networks; (3) there are no longer trusted and untrusted users.

Extending the zero-trust model to the level of information security yields three basic concepts: (1) no matter where you are, you must ensure that all resources are accessed securely; (2) adopt a least-privilege strategy and strictly enforce access control; (3) detect and log all traffic.
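The three concepts above can be sketched as a single per-request decision: authenticate, check an explicit grant, and log the outcome either way. The grant table, user names, and resource names below are hypothetical illustrations:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zero-trust")

# Hypothetical grant table: (user, resource) -> set of permitted actions.
GRANTS = {
    ("alice", "payroll-db"): {"read"},
    ("bob", "payroll-db"): {"read", "write"},
}

@dataclass
class Request:
    user: str
    resource: str
    action: str
    authenticated: bool  # result of a prior identity check (e.g. MFA)

def authorize(req: Request) -> bool:
    """Verify and log every request; nothing is trusted by default."""
    allowed = bool(
        req.authenticated
        and req.action in GRANTS.get((req.user, req.resource), set())
    )
    # Log all decisions, not just denials, to satisfy "detect and log all traffic".
    log.info("user=%s resource=%s action=%s allowed=%s",
             req.user, req.resource, req.action, allowed)
    return allowed
```

The point of the sketch is that location never appears in the decision: an unauthenticated request, or one without an explicit grant, is denied whether it comes from the office LAN or the Internet.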

Under these concepts, we must assume that all network traffic is threatening. Unless the security team verifies that it is legitimately authorized, inspected, and protected, it will not be trusted. For example, data access between internal and external networks is usually protected by encrypted channels, whereas cybercriminals can easily sniff unencrypted data. Therefore, security teams must protect internal data access with the same attitude they apply to external access over the Internet.

In managing access behavior, the zero-trust model also uses more methods to verify identity and mitigate harm.

What we relied on in the past was mainly protection based on role-based access control (RBAC), commonly used in network access control, infrastructure software, and identity and access management systems. Under the zero-trust model, RBAC is no longer the only method. The focus is on configuring the lowest access rights and implementing precise access control to reduce the impact of attacks and abuse. For example, based on a user's declared identity, appropriate access to resources is allocated and limited to the range needed to perform their tasks, so that they can only do the right thing, rather than blindly trusting the user regardless of the possibility that intent or negligence leads them to do the wrong thing.
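A minimal sketch of the least-privilege RBAC described above, with hypothetical roles and permissions: each role carries only the permissions its tasks require, and anything not explicitly granted is denied.

```python
# Hypothetical role definitions: each role holds only the
# (resource, action) pairs needed for its tasks -- least privilege.
ROLES = {
    "auditor":  {("reports", "read")},
    "operator": {("servers", "restart"), ("logs", "read")},
}

# Hypothetical user-to-role assignments.
USER_ROLES = {
    "carol": {"auditor"},
    "dave":  {"operator", "auditor"},
}

def can(user: str, resource: str, action: str) -> bool:
    """Permit an action only if some assigned role explicitly grants it."""
    return any((resource, action) in ROLES[role]
               for role in USER_ROLES.get(user, set()))
```

Under this scheme an auditor can read reports but can never restart a server, which bounds the damage from either a compromised account or an honest mistake.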

Of course, there are no perfect and seamless restrictions in the world. To further ensure that things are done right, and to catch errors early, we also need continuous detection and logging. For example, we must have network analysis and visibility (NAV) capability over network traffic activities, with detection and logging carried out together, processing all network traffic in an active, real-time manner, not passively, with delay, or only for internal traffic.

To meet such requirements, the zero-trust model also defines the characteristics that network traffic analysis and visibility tools should possess. For example, they should scale flexibly and provide continuous situational awareness, and they should include network discovery (finding and tracking assets), flow data analysis (analyzing traffic patterns and user behavior), packet capture and analysis, network metadata analysis (analyzing network flow records), and network forensics (assisting incident response and crime investigation).

In the design of the zero-trust architecture, John Kindervag believes four components need to work together, namely: the segmentation gateway (SG), microcore and perimeter (MCAP), centralized management, and the data acquisition network (DAN).

The SG incorporates functions that were originally dispersed across multiple security products, such as firewalls, network intrusion prevention systems, web application firewalls, network access control, content filtering gateways, VPN gateways, and other encryption products, and integrates them deeply at the network level. The SG also builds in a packet forwarding engine so that it can sit at the center of the network. Security personnel can define common control policies in the SG, which must be configured with multiple high-speed network interfaces to handle huge volumes of network traffic.

The SG here is quite similar to the UTM appliances we are familiar with. However, John Kindervag noted that UTM controls the network boundary, whereas the SG is deployed at, and aimed at, the center of the network.

An MCAP is a network partition: a switching zone formed by devices connected to a network interface of the SG. Each zone is served by dedicated microcore switches that share the same functions and policy control attributes, and each isolation zone becomes a microperimeter. Security personnel can aggregate all the switches of the MCAPs into a unified switching network to achieve centralized management.

So-called centralized management refers to operating the network's backplane through a transparent, integrated mechanism for managing all MCAPs as a whole. Moreover, it must be upgraded from command-line operation of individual components to a centralized and easier-to-use management system.

As for the DAN, its purpose is to highlight the importance of capturing network data. It is itself a kind of MCAP, in which a security information and event management system (SIEM) and network analysis and visibility (NAV) tools can be deployed, responsible for centrally acquiring, analyzing, and logging all network traffic.

Using such a network zone, we can process all the traffic that passes through the SG, including the part connecting all the MCAPs. By mirroring and forwarding the traffic, we can effectively collect the packets, Syslog, SNMP, and other information transmitted within it and store it in a single location, facilitating subsequent analysis of network traffic and approaching real-time processing.

Attacks come from all directions, and the border division needs to be more detailed

In the zero-trust model, the threat of attack extends further. It is like protecting the safety of a president: attention must be paid to the identity, location, and accessibility of the protected objects; border control needs multiple layers of partitions, such as perimeter fences and military police, with the secret service surrounding the president's car to form micro-boundary protection; and it is also necessary to guard against remote sniper attacks by doing endpoint control well.

Application control strategy based on zero trust model

Depending on the motivation, the level of threat also changes. Whether malicious or well-intentioned, endpoint devices and applications may become threats. To deal with different levels of threat, blacklists and whitelists can be used for control; for relatively normal and legitimate endpoints and applications, isolation can be used to enforce protection together with least-privilege access.
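The whitelist (allowlist) side of the control above can be sketched as hash-based application control: a binary runs only if its fingerprint is on an approved list. The binaries and the list here are hypothetical stand-ins:

```python
import hashlib

# Hypothetical allowlist of approved application binaries, keyed by SHA-256.
# A real endpoint agent would populate this from a signed policy.
APPROVED_HASHES = {
    hashlib.sha256(b"trusted-app-v1").hexdigest(),
}

def may_execute(binary: bytes) -> bool:
    """Default-deny application control: run only allowlisted binaries."""
    return hashlib.sha256(binary).hexdigest() in APPROVED_HASHES
```

Note the default-deny posture: even a single flipped byte in an otherwise trusted binary changes the hash and blocks execution, which is the same "verify, never assume" spirit as the rest of the model.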

The zero-trust model has begun to expand significantly, and can be applied at more levels

In the zero-trust research reports Forrester released at the beginning, much attention was paid to network architecture, and network segmentation and isolation do play a very important role there. Therefore, in recent years, whenever this concept is mentioned, most people think of next-generation firewalls as the product type. However, zero trust is not just network isolation but an overall protection strategy involving processes and technologies. So when Forrester elaborated the zero-trust model in recent years, it was not restricted to the network but extended protection further.

For example, at the beginning of this year, they formally proposed the Zero Trust eXtended (ZTX) Ecosystem, which decomposes the zero-trust model with data at the center, surrounded by the four pillars of people, devices, networks, and workloads, tied together by technologies such as visibility and analytics and automation and orchestration.

The number of companies that recognize and value zero trust has increased, and both Google and Netflix have begun to adopt it

Beyond the research institutions that have long advocated this concept, there has been a major change in market views, and the zero-trust model has received much attention recently. There are several reasons for this.

One of them is related to Google. The company's chief information officer has said that large enterprises should establish a "zero trust" infrastructure. Google published the research paper "BeyondCorp: A New Approach to Enterprise Security" and subsequently several more papers exploring the concept and implementation of BeyondCorp. Since 2017, Google has offered new cloud services based on this self-developed zero-trust security framework and has begun to promote it actively. For example, at the Google Cloud Next conference it introduced the Cloud Identity-Aware Proxy (Cloud IAP), allowing users to build the kind of context-aware secure access environment described by BeyondCorp. Google is proud of this research project, which began in 2011, claiming that it allows most employees to work safely on untrusted networks every day without VPN protection.

In addition to Google, we have recently seen another network company build a zero-trust architecture: the well-known streaming video provider Netflix.

At the Usenix Enigma conference earlier this year, the company's senior security engineer Bryan Zimmer publicly introduced their zero-trust architecture, called the Location Independent Security Approach (LISA), which contains three basic principles: (1) trust is rooted in the identity of the person and the health of the endpoint, not the location; (2) the office network cannot be trusted; (3) devices must be isolated.

In the long run, to improve overall security protection, more organizations should join the ranks of those implementing the zero-trust model. As the new wave of digital transformation continues, application systems and services are bound to operate in ever more open environments. If we continue to use the traditional dichotomies (internal versus external, trust versus distrust) to define the scope of information security protection, it will be increasingly difficult to cope with the unknown.

If we can thoroughly understand this reality, and think through and construct a zero-trust model suitable for our own operating environment, then through continuous verification, logging, analysis, and supervision we can effectively implement control and protection, instead of a defensive strategy focused on only one end, and thereby obtain a solid guarantee.

Google invests in the development of a zero-trust model

The zero-trust model is not advocated only by information security vendors; cloud service giant Google has invested in it as well. They spent six years developing a zero-trust security framework called BeyondCorp. In terms of access control, they shifted protection from the original network boundary to the control of individual devices and users. Today, their employees can connect to the company's application systems from anywhere, without needing encrypted connections via a traditional VPN.

Published: Jun 12, 2020. Source: ithome
