
Panoramic Analysis of Web 3.0 Technical Modules: which modules need to be revamped?

I have previously written an article analyzing the role of decentralized storage in the Web3.0 architecture and the current development of Filecoin, the leading decentralized storage project. This time, I will analyze the current Web2.0 architecture: which of its technical components need to be transformed to fully realize the vision of Web3.0 in the future, and how.

Before we start the analysis, let's talk about what Web 3.0 is. So far, the development of the Internet has gone through two stages: Web1.0 and Web2.0. Most websites in the Web1.0 stage were static; there was no interaction between users and the information on the network. Users could only consume the information displayed to them, and the effectiveness and efficiency of information acquisition were relatively low.

With the improvement of network speed and the increase of bandwidth, people gradually began to interact with the Internet. In 2004, Dale Dougherty, a vice president at O'Reilly Media, put forward the concept of "Web2.0": the read-write web.

In the early days of Web2.0, unlike in Web1.0, all Internet users could create their own content and upload it to the network instead of just retrieving information from it, which greatly increased the richness of network information. With the further development of AI, big data, and other technologies in recent years, human-computer interaction has advanced to a new stage. Online behavior data such as the information users browse, their clicks, and their searches are captured and recorded, and the technical back end can build more accurate user portraits by combining a user's real-time data with their historical information, then recommend corresponding products or information according to that profile. This not only improves merchants' purchase-conversion efficiency but also helps users find the goods they may want to buy more quickly, improving the user experience.

However, while this centralization of information is convenient, it has a major disadvantage: all of a user's data is collected and used by the platform without the user's awareness, and even the ownership of that data is ambiguous.

In the early days of Web1.0 and Web2.0, because the amount of user data was small and its dimensions were relatively few, a user's personal data could not produce much value. However, as people's use of the network has grown in recent years, the value of personal information online can no longer be ignored. In the past two years, there have been cases in various countries of Internet companies violating personal privacy and of user data being stolen. In the future, with the development of the Artificial Intelligence of Things (AIoT) and 5G networks, the dimensions of personal network data will become more comprehensive and more valuable, which makes data security and data privacy all the more important.

Web3.0 emerged to solve the problems Web2.0 currently faces. Because of its trustless nature, tamper resistance, and ability to confirm ownership, a blockchain network can well meet the needs of Web3.0's underlying technology.

At the same time, because of this change in network architecture, data is no longer mere numbers but a commodity with value attributes, and our existing data network is gradually being transformed into a value network.

Overview of the current Web2.0 technical architecture; image from CSDN.

The picture above shows the current technical architecture of Web2.0. The core technical links can be divided into the storage layer, development layer, service layer, network layer, user layer, and business layer. In addition, it also needs the support of auxiliary platforms such as the test platform, operation and maintenance platform, data platform, and management platform.

Below I will explore the specific role of each component and the role it will play in Web3.0 in the future.

Supporting function platform.
Management platform.
The first is the management platform, which mainly serves the development team and the company level. Its core responsibility is permission management, whether for business systems, middleware systems, or platform systems.

The management platform has two main functions: the first is identification, which determines the identity of the current operator to prevent unauthorized parties from logging into an account; the second is delimiting the operating permissions of different operators to prevent them from acting beyond their authority.
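The two functions above, identification and permission delimitation, can be sketched in a few lines. This is an illustrative toy, not any real product's API: a production management platform would use salted password hashing and a proper identity provider, and all user, role, and action names below are invented.

```python
# Toy sketch of the two management-platform functions described above:
# (1) identification: verify who the operator is;
# (2) authorization: limit what that operator may do.
# Plain SHA-256 is used only for brevity; real systems need salted hashing.
import hashlib
import hmac

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}   # username -> password hash
ROLES = {"alice": {"deploy", "read_logs"}}                  # username -> granted permissions

def identify(user: str, password: str) -> bool:
    """Step 1: confirm the operator's identity."""
    stored = USERS.get(user)
    return stored is not None and hmac.compare_digest(
        stored, hashlib.sha256(password.encode()).hexdigest())

def authorize(user: str, action: str) -> bool:
    """Step 2: check the operator's permission for a given action."""
    return action in ROLES.get(user, set())

if identify("alice", "s3cret") and authorize("alice", "deploy"):
    print("alice may deploy")
```

Keeping the two checks separate matters: identification answers "who is this", authorization answers "what may they do", and a management platform must enforce both.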

The management platform will remain an indispensable part of Web3.0 in the future. Although current blockchain projects are mostly decentralized applications (DApps) built on existing public chains, in the future Web3.0 world not all projects will be completely open source and decentralized; it is more likely that today's Internet enterprises will migrate over. In that case, the management platform is still an indispensable technical component.

Even for decentralized projects, especially products with relatively complex business logic and projects that interact directly with money, the need for a management platform cannot be ignored. Its absence can lead to product-level crashes and security problems, including attacks by external hackers and internal theft. Because the management platform mainly serves B-end users and is used for internal management, it needs no decentralized transformation and can continue with the current product design.

Data platform.
The data platform is a very important part of the current Internet technology architecture. At present, it is mainly used in three dimensions: data management, data analysis, and data application.

The first dimension is data management: data management includes data collection, data storage and data access.

The first is data collection. At present, the data collected includes logs, user behavior, business data, and other information. Because Web3.0 is a decentralized network architecture in which a user's data belongs only to the user, the data collection mechanism needs to be reformed. There is no doubt that data collection will still need to be provided by the platform, because users lack the ability to capture and collect their personal data themselves; how to ensure the platform does not do evil, however, is an important direction that needs to be studied.

Next is data storage, which was fully analyzed in the previous article. Decentralized storage is indispensable; otherwise the security of user data cannot be guaranteed. The data access service is mainly responsible for providing protocols for reading and writing data. Since this sits at the protocol level and is not directly related to the decentralization of the network, it can be retained as-is without special technical improvement.

Finally, there is data security. Since data moves to decentralized storage, the main responsibility for data security shifts to the organizations that provide that storage. At the same time, how the data to be stored on the chain is generated is also a very important part, and a weak link in security, which needs further study.
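One common pattern relevant to the weak link just described, sketched here only as an illustration: keep the bulk data off-chain and anchor only its digest on-chain, so that later tampering with the stored copy is detectable. The two dictionaries below are invented stand-ins for an on-chain record and a decentralized storage node, not any specific project's API.

```python
# Illustrative integrity-anchoring pattern: store bulk data off-chain and
# record only its SHA-256 digest in an (assumed immutable) on-chain anchor.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

on_chain_record = {}   # stand-in for an immutable on-chain key -> digest map
off_chain_store = {}   # stand-in for an untrusted decentralized storage node

def put(key: str, data: bytes) -> None:
    on_chain_record[key] = digest(data)   # the anchor cannot be silently changed
    off_chain_store[key] = data           # the bulk copy could be tampered with

def get_verified(key: str) -> bytes:
    data = off_chain_store[key]
    if digest(data) != on_chain_record[key]:
        raise ValueError("stored data was tampered with")
    return data

put("profile", b"user data")
assert get_verified("profile") == b"user data"
off_chain_store["profile"] = b"evil"   # simulate tampering by the provider
# get_verified("profile") would now raise ValueError
```

This detects tampering after storage; the harder open problem the paragraph points at, guaranteeing the data is honest at the moment it is generated, is not solved by hashing alone.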

The second dimension is data analysis, which includes two aspects: data statistics and data mining. Both perform further analysis on existing data and are not directly related to the decentralization of the platform or the ownership of the data, so no special technical improvement is needed.

The third dimension is data application. This level mainly depends on large data sources and strong data analysis capabilities, since applications can only be built on top of data analysis. In the Web3.0 era, although ownership of data returns to users' hands, users can still authorize the platform to use various data in exchange for a better product experience, so the richness of data will not be affected much, and data applications will face no great hindrance.

Operation and maintenance platform.
The main purpose of the operation and maintenance platform is to ensure the normal operation of the platform and applications, and the four core responsibilities are configuration, deployment, monitoring and emergency response.

Configuration is mainly responsible for resource management, such as IP address management, virtual connection management, and so on. Deployment is mainly responsible for releasing the system online, including release management and rollback. Monitoring is mainly responsible for collecting and monitoring relevant data once the system is running. Emergency response is mainly responsible for handling system failures, such as taking faulty machines offline, switching IPs, and so on.

The main responsibility of the operation and maintenance platform is to keep the platform running normally, which does not overlap much with data or decentralization, so no major changes are needed. However, because platform operations may intersect with blockchain technology in the future, on-chain information and related operations will need to be managed and monitored, and this part needs further emphasis and development.

Test platform.
The test platform is mainly used for day-to-day testing of the platform's various functions and is divided into four aspects: use-case management, resource management, task management, and data management. Compared with the three functional platforms described above, the test platform is more independent and does not interact much with real business scenarios, so in the future Web3.0 stage it will not need large-scale upgrades or improvements.

Core layering.
In the whole Web architecture, the core layer can be divided into six layers, namely storage layer, development layer, service layer, network layer, user layer and business layer.

Among them, the storage layer, development layer, service layer, and network layer are mainly back-end technology layers, used to support the normal operation of application software, application platforms, and so on. The business layer and the user layer, on the other hand, are mainly front-end presentation layers, which display information to users and handle interaction.

I will analyze these six layers in turn:

Business layer.
The business layer is relatively flexible, mainly designed according to different applications and the specific business logic of the platform. Therefore, whether the network is decentralized or not will not have a great impact on the business layer, and there is no need for large-scale improvement at the business level.

User layer.
The user layer is relatively complex, which includes three parts: user management, information push and user information storage. Among them, the two sections of user management and information push do not need to be greatly improved, because no matter whether the network is decentralized or not, it is necessary to have the same user login system (single sign-on or authorized login) and information push system (the core is to identify login account and message push). Unlike the previous two sections, the user information storage section will become relatively complex.

The storage of user information divides into two parts. The first is the content users upload, such as the pictures and posts a user uploads to Weibo or the information uploaded to WeChat, which undoubtedly belongs to the user. Yet this content must be stored on the platform or application side, first because most users lack the ability to set up a personal storage scheme (decentralized or centralized) on their own, and second because the user base is very large and uploads are frequent: if each user used their own storage scheme, connection and call latency would greatly degrade the user experience of the application or platform. However, if everything is stored on the platform or application side, it is difficult to guarantee the privacy of the data, because the back-end technology cannot be shown to users intuitively; even if the platform or application secretly accesses the storage or uses the user's information, this is hard to detect. Therefore, to truly achieve Web3.0, in addition to solving the basic technical problem of decentralized storage, we also need to study how to guarantee the security and privacy of data before it is stored.

The second part of user information is users' behavioral data on the site, such as which products they click and browse or what information they query. This kind of data is harder to manage than user-uploaded data, because the behavior all takes place within the platform or application, and users have even weaker control over it, often lacking even the right to know it exists. At present, users cannot see their own behavioral data inside an application, so in practice both ownership and usage rights sit with the platform. Getting platforms or applications to return this data to users may depend more on laws and regulations, since they will not willingly give up an asset as important as user information. Technically, this data faces the same problem as personally uploaded data: how to ensure it goes into decentralized storage once generated, and how to ensure developers do not do evil before it is stored, is the biggest problem.

Storage layer.
The storage layer can be simply divided into two parts. The first is the database, including SQL (relational) and NoSQL (non-relational) databases. A database is mainly used to manage data, supporting operations such as adding, querying, updating, and deleting. The database is to storage what the operating system is to a computer: even if decentralized storage is realized in the future, database technology remains indispensable, because without databases the efficiency of data storage, collaboration, and information reading would be greatly affected. At present, however, there is no strong need to decentralize the database itself, as opposed to the storage.
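The add, query, update, and delete operations mentioned above can be shown concretely with SQLite, the relational database built into Python; it is chosen here only for illustration, and the table and values are invented.

```python
# Minimal CRUD example with SQLite (Python's built-in relational database),
# illustrating the add / query / update / delete operations described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))       # add
row = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()  # query
conn.execute("UPDATE users SET name = ? WHERE id = 1", ("bob",))      # update
conn.execute("DELETE FROM users WHERE id = 1")                        # delete
conn.commit()
print(row)  # ('alice',)
```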

The second part of the storage layer is the decentralized storage technology highlighted in the previous article; decentralized storage is an indispensable technical component for realizing Web3.0. Only by achieving decentralized storage can we better guarantee the security and privacy of data. For more information, refer to the previous article; it will not be repeated here.

Development layer.
The development layer is similar to the test platform mentioned above, which is purely functional components in the whole architecture, mainly to provide support for the development of the platform or application.

The development layer mainly includes three specific technical links: the development framework, the server, and the container. The development framework provides the skeleton for development, and different development languages have different frameworks. The server here mainly refers to the server at the software level, which connects to the business layer and the user layer and supports the platform or application. Finally, there is the development container, which mainly manages objects after development (including their life cycle, dependencies, and so on).

The development layer's technology stack is relatively independent of the network structure, and there is no need to redesign its technical architecture to adapt to Web3.0.

Service layer.
The service layer is mainly used to coordinate cooperation between different systems within the same architecture. Its three main functional modules are the configuration center, the service center, and the message queue.

The configuration center is mainly used to provision all servers uniformly to support each business module, and to redeploy rapidly after a failure to avoid affecting live operation. The service center mainly solves cross-system configuration and scheduling, identifying and deploying servers through the service-name system and the service bus. Finally, the message queue mainly implements asynchronous notification across systems. Like the development layer, the service layer mainly supports the development of platforms or applications and is relatively independent of the specific underlying network structure, so future Web3.0 development can adopt the current service layer directly.
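The asynchronous cross-system notification the message queue provides can be illustrated with Python's in-process queue. This is a toy: a production system would use dedicated middleware such as Kafka or RabbitMQ, and the event name below is invented.

```python
# Toy illustration of a message queue decoupling two systems: the producer
# publishes an event and continues without waiting, while a consumer thread
# processes it asynchronously.
import queue
import threading

events = queue.Queue()
handled = []

def consumer():
    while True:
        msg = events.get()
        if msg is None:          # shutdown sentinel
            break
        handled.append(f"notified: {msg}")
        events.task_done()

t = threading.Thread(target=consumer)
t.start()
events.put("order_paid")   # the producer returns immediately (asynchronous)
events.put(None)           # tell the consumer to stop
t.join()
print(handled)  # ['notified: order_paid']
```

The producer never blocks on the consumer's work; that decoupling is exactly why the queue sits between systems rather than having them call each other directly.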

Network layer.
The network layer is similar to the storage layer: compared with Web2.0, its architecture will change greatly in Web3.0, mainly from a centralized architecture to a decentralized one. In the traditional architecture, the network mainly includes three basic modules: load balancing, the CDN, and the server room and data center.

Load balancing includes DNS load, hardware load, and software load. Its main purpose is to balance the load across computing units; in addition, load balancing needs to be considered in terms of load, performance (throughput, response time), and business. Load balancing mainly serves the platform or the application itself, but the DNS (Domain Name System) that DNS load depends on is an important component of the Web, and it is also a key direction to explore in a decentralized network.
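The simplest software load-balancing policies, plain round-robin and weighted round-robin, can be sketched as follows; the server addresses and weights are placeholders, not any real deployment.

```python
# Minimal software load-balancer sketch: round-robin cycles through servers
# evenly; weighted round-robin repeats each server in proportion to its weight.
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rr = itertools.cycle(servers)

def pick_round_robin() -> str:
    """Return the next server in rotation."""
    return next(rr)

def expand_weighted(weights: dict[str, int]) -> list[str]:
    """Weighted round-robin: repeat each server proportionally to its weight."""
    return [s for s, w in weights.items() for _ in range(w)]

assert [pick_round_robin() for _ in range(4)] == \
    ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]
print(expand_weighted({"a": 2, "b": 1}))  # ['a', 'a', 'b']
```

Real balancers layer health checks and performance feedback (throughput, response time) on top of these basic rotations, which is the "based on load, performance, and business" consideration mentioned above.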

The current domain name system is centralized and can be roughly described as a three-tier structure. At the top, ICANN (the Internet Corporation for Assigned Names and Numbers) controls everything and occupies the central position. The second tier is the domain name registries, such as Verisign, which control top-level domains (TLDs) such as .com. At the bottom are domain name registrars, which provide retail domain-registration services directly to customers.
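The three-tier, top-down structure just described can be modeled as a toy lookup chain. The tables below are invented stand-ins for the root zone, a registry, and an authoritative nameserver; this is a schematic of the hierarchy, not real DNS resolution.

```python
# Toy model of the three-tier DNS hierarchy: root (ICANN) -> TLD registry
# -> authoritative nameserver. Resolution always starts at the single root,
# which is exactly the centralization the surrounding text describes.
ROOT = {"com": "verisign"}                        # root zone: TLD -> registry
TLD  = {"verisign": {"example.com": "ns1"}}       # registry: domain -> nameserver
NS   = {"ns1": {"example.com": "93.184.216.34"}}  # nameserver: domain -> A record

def resolve(domain: str) -> str:
    tld = domain.rsplit(".", 1)[-1]
    registry = ROOT[tld]                   # step 1: ask the root
    nameserver = TLD[registry][domain]     # step 2: ask the TLD registry
    return NS[nameserver][domain]          # step 3: ask the authoritative server

print(resolve("example.com"))  # 93.184.216.34
```

Every lookup passes through the `ROOT` table first; whoever controls that table can censor or redirect any name below it, which is the choke point the next paragraph discusses.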

At the network level, ICANN's absolute control means that all domain names, and the websites linked to them, are subject to ICANN's censorship, and the information may be seized or tampered with, which greatly reduces the authenticity and freedom of information. Decentralizing DNS cannot by itself realize the confirmation and protection of ownership of user information mentioned at the beginning of this article, but fully decentralizing DNS would go a long way toward decentralizing the network and would greatly improve the freedom of network information. At present, however, decentralizing DNS looks difficult: current projects such as Handshake still have to rely on ICANN's centralized system, mainly in the two areas of TLDs and CAs (Certificate Authorities). On the whole, whether DNS is decentralized will not have a substantial impact on Web3.0; it is more icing on the cake.

The second module is the CDN (Content Delivery Network). The CDN is an important part of today's network. Relying on edge servers deployed everywhere, and on the load balancing, content distribution, and scheduling modules of a central platform, a CDN lets users get the content they need from nearby, reducing network congestion and improving the response speed and hit rate of user access.
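The edge-caching behavior this describes (serve from the nearby cache on a hit, fetch from the origin on a miss, evict old content when full) can be sketched with a small LRU cache. All names here are illustrative; a real edge node also handles TTLs, invalidation, and distribution.

```python
# Sketch of an edge node's cache: hits are served locally, misses fall back
# to the origin, and the least recently used item is evicted when full.
from collections import OrderedDict

ORIGIN = {"/logo.png": b"<logo bytes>", "/index.html": b"<html>"}  # stand-in origin

class EdgeNode:
    def __init__(self, capacity: int = 2):
        self.cache: OrderedDict[str, bytes] = OrderedDict()
        self.capacity = capacity
        self.hits = self.misses = 0

    def get(self, path: str) -> bytes:
        if path in self.cache:
            self.cache.move_to_end(path)       # mark as recently used
            self.hits += 1
            return self.cache[path]
        self.misses += 1
        data = ORIGIN[path]                    # fetch from origin on a miss
        self.cache[path] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

edge = EdgeNode()
edge.get("/logo.png")
edge.get("/logo.png")
print(edge.hits, edge.misses)  # 1 1
```

Note that the cached copy sits in plaintext on the edge node; that is precisely why the caching step is the point of highest risk for data security and privacy, as the next paragraph argues.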

The key technologies of a CDN are mainly content storage and distribution, and the content storage part needs decentralized transformation. In a CDN, the edge needs to cache the distributed content before redistributing it to users, and this caching is where the security and privacy of data are most at risk: the data may be stolen by the CDN provider, or attacked by external hackers through DDoS and other methods. Large companies generally build internal CDN networks to protect their business data; because of the cost, small businesses mainly rely on professional CDN providers. Overall, CDN services are aimed at B-end enterprises and have little to do with individual user data, so decentralizing CDN technology would not bring much improvement to the security and privacy of individual users' data.

The last module of the network layer is the server room and data center. These two modules are mainly hardware-based, supporting the operation of platforms and applications through server rooms and data centers.

Web3.0 summary.
At present, the development path of Web3.0 is still relatively unclear. The Web3 Foundation is a team that attempted Web3.0 relatively early, and one mission of its projects, including Polkadot, is to serve as Web3.0 infrastructure. But I think the feasibility of completely abandoning the current Web2.0 technical framework and re-establishing an entirely new one is relatively low.

First of all, there is no doubt that blockchain technology is still immature. Ethereum, generally recognized as the best public chain at present, does not yet show the potential to be a "world computer" at this stage, and its problems are mainly concentrated in its own underlying technical logic. Even after the low-level technical problems are solved, problems at the application level will no doubt follow.

The characteristics of blockchain technology itself make it difficult to support Internet-scale transaction volumes and user numbers. When evaluating public chains, the main metric today is still TPS, but to replace the current Internet architecture, even a TPS that meets demand is still not enough. Because it prioritizes decentralization and security, the blockchain's bottom layer also imposes restrictions and requirements on upper-layer applications, and these make it impossible for applications to implement more complex business logic. Under this premise, I think it is very difficult to fully migrate many traditional Web2.0 applications.

But even if migration is feasible, I believe that few traditional projects will be migrated.

First, the cost of migration is very high: the underlying logic of the blockchain is quite different from that of the current Internet, and migration means redevelopment and trial and error, all of which cost a great deal of time and money. Yet while paying this cost, a platform gains no additional users or other value in return; on the contrary, it loses very valuable personal user information. There is therefore no reason for current applications to migrate.

At the same time, for most apps, such as the commonly used takeout, e-commerce, and map applications, there is no practical value or significance in building them on a blockchain.

In addition, from the above analysis we can conclude that the main technical differences between Web3.0 and Web2.0 will center on decentralized storage and decentralized computing, while the other technical modules do not need to change much. Therefore, keeping the current Web2.0 technology while upgrading and applying decentralized storage and decentralized computing will be the better choice.

Finally, on the whole, because decentralized storage and decentralized computing are still immature, Web3.0 will take a long time to implement.

At the same time, the move from Web2.0 to Web3.0 seems to need policy support. At present, Chinese Internet users do not have much awareness of, or demand for, ownership and usage rights over their personal information, so it is difficult to drive the shift from Web2.0 to Web3.0 from the user side. Without policy pressure, traditional Internet applications have no incentive to change their current architectures proactively, and the advance of Web3.0 will be greatly hindered.

Combining all the information and problems mentioned above, Web3.0 will be a good development direction in the future, but it is still in a very early stage, and it will take a longer time to develop and evolve in the future.