
(Center for Applied Research of Computer Networks)

TSPIKS is a research project aimed at creating next-generation computer network technologies and products in Russia. We develop and implement the latest and most promising technologies in the field of computer networks and the Internet, and we demonstrate and test their effectiveness on real industrial and business problems. The center is a resident of the IT cluster of the Skolkovo Innovation Foundation.

Trends in the development of computer networks and the Internet

This material was prepared specifically for the magazine Skolkovo Review.

Today it is impossible to imagine our lives without the Internet and information technology. They have become firmly embedded in our lives and have greatly simplified them. As information technology develops, new tools become available that make familiar processes faster, more convenient and cheaper. However, the changes we are seeing now are only the tip of the iceberg. Networking is just at the beginning of its growth, and the really big innovations lie ahead. So what evolution over the coming decades can be predicted already today, judging by the direction in which computer networks and the Internet are developing?
1. Audience reach will grow, and the Internet will appear in the most remote places on the planet.
By the end of 2012, the number of Internet users worldwide had reached 2.4 billion. By 2020, according to forecasts of the US National Science Foundation, it will grow to 5 billion. The Internet will become more geographically distributed. The largest user growth over the next 10 years will come from developing countries in Africa (where no more than 7% of the population is online today), Asia (about 19%) and the Middle East (about 28%). By comparison, over 72% of North Americans currently use the Internet. This trend means that by 2020 the Internet will not only reach remote corners of the world, but will also support many more languages, and not only the ASCII encoding system we are used to. According to the Ministry of Communications of the Russian Federation, there were 70 million Russian Internet users at the beginning of 2012, which put Russia first in Europe and sixth in the world. According to a study by the RBC.research agency, Internet penetration in Russia will exceed 80% in 2018.
2. Information technology is entering the era of software.
We are now in a stage of "intellectualization of hardware", when software is becoming more important than the hardware itself. The software industry will grow at a rapid pace: in 2010 the annual growth rate of software was at least 6%, and by 2015 the market volume will reach $365 billion, a quarter of which falls on the business application market. The hardware market, by contrast, is shrinking: its size in 2013 was $608 billion, and the growth rate from 2008 to 2013 was negative (-0.7%). Until 2018, growth of 2.1% is forecast, mainly due to the PC market (which will grow by 7.5%) and peripheral devices (printers, scanners, etc.). The 21st century is the century of wireless technologies. In 2009 alone, the number of mobile broadband subscribers (3G, WiMAX and other high-speed data transmission technologies) increased by 85%. By 2014, it is predicted that 2.5 billion people worldwide will use mobile broadband.
3. Increasing data transfer speed and bandwidth.
Today the data transfer rate in a good network is about 40 Gbit/s. For comparison, the four volumes of Leo Tolstoy's "War and Peace" amount to roughly 40 Mbit, i.e. a thousand times less: those four volumes can be transferred in about a millisecond. In the near future, however, it will become possible to transfer data at the speed of light. Already today there is WiGig technology, which allows information to be transmitted at 7 Gbit/s over short distances thanks to a new physical-layer encoding method. The same goes for bandwidth. According to Cisco, Skype today has over 35 million concurrent users, Facebook over 200 million, and 72 hours of video are uploaded to YouTube every minute. Experts predict that by 2015 the number of devices on the network will be twice the world's population, and by 2014 about 80% of all traffic will be video traffic. The images and video files constantly exchanged on the World Wide Web require higher bandwidth, and technologies will develop in this direction. Users will communicate and share information through video and voice in real time, and more and more network applications requiring real-time interaction are emerging.
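A rough back-of-the-envelope check of these figures (a sketch in Python; the ~40 Mbit size of the four volumes is the article's own illustrative assumption):

```python
# Back-of-the-envelope check of the figures above. The ~40 Mbit size of the
# four volumes of "War and Peace" is the article's illustrative assumption.

data_bits = 40e6         # ~40 Mbit of text
link_rate_bps = 40e9     # 40 Gbit/s link

transfer_time_s = data_bits / link_rate_bps
print(f"Transfer time: {transfer_time_s * 1e3:.1f} ms")  # ~1.0 ms
```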
4. The Semantic Web.
We are steadily moving towards a "semantic web", in which information is given precisely defined meaning, allowing computers to "understand" and process it at the semantic level. Today computers work at the syntactic level, the level of signs: they read and process information by its external features. The term Semantic Web was first coined by Sir Tim Berners-Lee (one of the inventors of the World Wide Web) in Scientific American. The semantic web will make it possible to find information with queries such as: "Find information about animals that use sound location but are neither a bat nor a dolphin."
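A minimal sketch of the idea, using a toy in-memory store of subject-predicate-object facts; the animal facts and predicate names below are invented purely for illustration and are not part of any real Semantic Web dataset:

```python
# Toy illustration of semantic-level querying: knowledge is stored as
# (subject, predicate, object) triples, and the query is phrased in terms of
# meaning rather than keywords. All facts below are illustrative placeholders.

facts = {
    ("bat", "uses", "sound location"),
    ("dolphin", "uses", "sound location"),
    ("oilbird", "uses", "sound location"),
    ("shrew", "uses", "sound location"),
    ("eagle", "uses", "keen eyesight"),
}

def animals_using(ability, exclude=()):
    """Return subjects that 'use' the given ability, minus the excluded ones."""
    return sorted(
        subj for (subj, pred, obj) in facts
        if pred == "uses" and obj == ability and subj not in exclude
    )

# "Animals that use sound location but are neither a bat nor a dolphin":
print(animals_using("sound location", exclude={"bat", "dolphin"}))
# -> ['oilbird', 'shrew']
```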
5. New objects of transmission.
Thanks to the development of new technologies, it will become possible to transmit over computer networks what previously seemed impossible - for example, smell. A machine analyzes the molecular composition of the air at one point and transmits this data over the network; at another point in the network this molecular composition, i.e. the smell, is synthesized. A prototype of such a device, called Olly, has already been built by the American company Mint Foundry, although it has not yet gone on sale. Soon, however, we may see these possibilities embodied in everyday life.
6. The Internet will become a network of things, not just computers. Today there are over 700 million computers on the Internet (according to the CIA World Factbook 2012). Every year each user has more and more devices that go online: computers, phones, tablets, etc. Already today the number of IP addresses exceeds the population of the Earth (IP addresses are needed, among other things, for household appliances). With a new computer network architecture, the era of the "Internet of things" will arrive. Things and objects will interact over networks, which will open up great opportunities in all spheres of human life. One of the nearest developments is "smart dust": sensors scattered over a large area that collect information. The US National Science Foundation predicts that nearly a billion sensors on buildings, bridges and roads will be connected to the Internet for purposes such as monitoring electricity use, security, and so on. In general, it is expected that by 2020 the number of Internet-connected sensors will be an order of magnitude greater than the number of users. To continue this thought, one can cite the reflections of Vinton Gray Cerf (an American mathematician, considered one of the inventors of the TCP/IP protocol, and a vice president of Google): "Suppose that all the products you put in the refrigerator are equipped with a special barcode or microchip, so that the refrigerator records everything you put into it. In that case, while at university or at work, you could view this information from your phone, browse different recipes, and the refrigerator would suggest what to cook today. If we expand this idea, we get roughly the following picture: you go to the store, and while you are there your mobile phone rings - it is the refrigerator calling to advise what exactly is worth buying." The smart Internet will transform social networks as we know them today into social media systems. Cameras and various sensors will be installed in premises. Through your own account you will be able, for example, to feed your pets or start the washing machine.
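A small sketch of how such a networked refrigerator might report its contents as a structured message that a phone application could read; the device name, topic string and field names are hypothetical, not part of any real product API:

```python
import json
from datetime import datetime, timezone

# Hypothetical "smart refrigerator" inventory report. The topic and the field
# names are illustrative placeholders, not a real appliance API.

inventory_report = {
    "device": "refrigerator-01",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "items": [
        {"barcode": "4601234567890", "name": "milk", "quantity": 1},
        {"barcode": "4609876543210", "name": "eggs", "quantity": 6},
    ],
}

payload = json.dumps(inventory_report)
# In a real system this payload would be published to a broker or cloud service;
# here we simply print the message a phone app would receive.
print("smart-home/kitchen/fridge/inventory:", payload)
```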
7. Robotization of society.
Already today we know examples of unmanned aerial vehicles, robot vacuum cleaners and police robots "working" in Japan - all of these technologies perform their functions without human intervention. Every year the penetration of such machines will only increase. One of the still unsolved problems of computing is recreating thinking in a computer. It is, however, possible to connect the human brain to a cybernetic, computer system - think of the film RoboCop. Similar experiments already exist today, in which a prosthetic leg or arm is connected to a person's spinal cord. Recall the example of the South African runner Oscar Pistorius, who has been without both legs since childhood but, thanks to carbon prostheses, overtakes able-bodied competitors at competitions. According to experts, the first such "superman", a cyber-organism, will appear before 2030. It will be physically perfect, resistant to disease, radiation and extreme temperatures - and yet it will have a human brain.
8. The new status of a person on the Internet.
The Internet is changing people's lives. The World Wide Web is becoming not only a platform for obtaining information and communicating, but also a tool for everyday needs such as making purchases and paying utility bills. The Internet has also changed the relationship between a person and the state. Personal visits and appeals to dedicated services will be reduced to a minimum. Submitting documents to a university, calling an ambulance, filing a police report, obtaining a passport - all of this can already be done electronically today, and the state will continue to be pushed to provide services via the Internet. Already today, nationwide electronic document management is a top priority of the Ministry of Telecom and Mass Communications of the Russian Federation. It is worth talking about a new status of the person in the world of Internet technologies. Network access will become a civil right of every person, protected and regulated by law along with other civil liberties. This is the near future. The concept of democracy in society is changing as a result: to express the will of citizens, special platforms, tribunes and media are no longer needed. Consequently, there will be a minimum of anonymity. The luxury of changing passwords and creating accounts under invented names, or leaving caustic comments under a cloak of invisibility, will most likely disappear. The login and password for entering the network may become a means of identifying a person, with real passport data tied to them. Moreover, this will most likely not be imposed "from above" as an attempt at censorship and control, but will be a desire of society itself, a demand "from below" - because the more real life moves onto the Internet, the more transparency its users will want. A person's reputation in life will determine their reputation on the global network; there will be no invented biographies. Having established a person's identity, the network itself will create filters and passes for access to age-restricted content, private information and various services, in accordance with solvency and even social reliability.
9. Changes in the labor market and education.
The active penetration of network technologies and the Internet will lead to changes in the labor market and in education. The Internet has already become a global and key communication tool, and it is increasingly turning from a platform for entertainment into a platform for work. Social media, e-mail, Skype, information resources, corporate sites and applications tie people not so much to a specific office as to the computer itself - and it does not matter whether you work from the office, from home, from a cafe or from the shore of the Indian Ocean. There will be more and more employees working remotely, and more and more "pocket offices", i.e. virtual enterprises that exist only on the Internet. The same goes for people receiving education remotely through the new formats the Internet provides. For example, today at Stanford University 25,000 people can listen to a lecture by two professors at the same time.
10. The Internet will become greener.
Network technology consumes a great deal of energy, the volume keeps growing, and experts agree that the future architecture of computer networks must be more energy efficient. According to the Lawrence Berkeley National Laboratory, the amount of energy consumed by the global network doubled between 2000 and 2006. The Internet accounts for 2% of the world's electricity consumption, which is equivalent to the capacity of 30 nuclear power plants - 30 billion watts. The trend towards "greening" the Internet will accelerate as energy prices rise.
11. Cyber weapons and cyber wars.
The development of Internet technologies and the capabilities of computer networks has another side of the coin, ranging from cybercrime associated with the growth of e-commerce to cyberwars. Cyberspace has already been officially recognized as the fifth "battlefield" (alongside land, sea, airspace and outer space). In 2010 the US Navy even created the CYBERFOR cyber force, which reports directly to the US Navy command. Today it is not only the PCs of ordinary users that fall victim to hacker virus attacks, but also the industrial systems that control automated production processes. A malicious worm can be used for espionage as well as for sabotage of power plants, airports and other life-supporting facilities. In 2010, for example, the Stuxnet computer worm hit Iran's nuclear facilities, setting that country's nuclear program back two years. The use of malware turned out to be comparable in effectiveness to a full-fledged military operation, but without human casualties. The uniqueness of this program was that, for the first time in the history of cyberattacks, a virus physically destroyed infrastructure. Most recently, on March 27 of this year, the largest hacker attack in history took place, one that even reduced the data transfer speed of the entire Internet. The target of the attack was Spamhaus, a European anti-spam company. The DDoS attack reached 300 Gbit/s, even though 50 Gbit/s is enough to knock out the infrastructure of a large financial organization. National security is one of the most important items on the agenda in developed countries, and the current architecture of computer networks cannot provide such security. The antivirus and web protection industry and the development of new security technologies will therefore grow every year.
12. The Internet and network technologies move into space.
Today the Internet operates on a planetary scale. Next on the agenda are interplanetary space and the space Internet.

The International Space Station is connected to the Internet, which significantly speeds up the station's work and its interaction with Earth. But the usual way of establishing communication over fiber optic or ordinary cable, so effective on Earth, is not possible in space - in particular because the usual TCP/IP protocol cannot be used in interplanetary space (a protocol is a special "language" that computer networks use to "talk" to each other).

Research is under way to create a new protocol that would let the Internet function on lunar stations and on Mars. One such protocol is called Disruption Tolerant Networking (DTN). Computer networks using this protocol have already been used to connect the ISS with Earth; in particular, photographs of salts obtained in weightlessness were sent over its communication channels. Experiments in this area continue.
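A minimal sketch of the store-and-forward idea behind DTN, under the simplifying assumption that a node just keeps bundles in local storage until a contact window with the next hop opens; real DTN implementations (the Bundle Protocol) are considerably more involved:

```python
from collections import deque

# Simplified store-and-forward behaviour of a delay/disruption-tolerant node:
# bundles are queued locally and forwarded only when a contact window opens.
# Illustrative sketch only, not the actual Bundle Protocol.

class DTNNode:
    def __init__(self, name):
        self.name = name
        self.storage = deque()        # bundles waiting for the next contact

    def receive(self, bundle):
        self.storage.append(bundle)   # keep the bundle even if no link is up

    def contact(self, next_hop):
        """Called when a communication window with the next hop opens."""
        while self.storage:
            bundle = self.storage.popleft()
            print(f"{self.name} -> {next_hop.name}: {bundle}")
            next_hop.receive(bundle)

earth, relay, iss = DTNNode("Earth"), DTNNode("Relay"), DTNNode("ISS")
earth.receive("experiment_photo.jpg")
# ...hours may pass with no end-to-end connectivity...
earth.contact(relay)   # first contact window opens
relay.contact(iss)     # a later contact window delivers the bundle
```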

In more than two decades of development, the Internet has hardly changed conceptually or architecturally. New data transmission technologies have been introduced and new services created, but the basic concept of the network, the architecture of computer networks, remains at the level of the 1980s. Change is not only long overdue but vital, because no real innovation is possible on top of the old architecture. Computer networks are already operating at the limit of their capabilities, and they may simply fail to withstand the load that such active growth will create. The development and implementation of all these trends will only be possible after a new, more flexible computer network architecture is introduced. For the entire scientific IT world, this is question number one.

The most promising computer network technology and architecture today, one capable of leading the way out of this crisis, is software-defined networking (SDN). In 2007, researchers at Stanford and Berkeley developed a new "language" for computer networks to communicate - the OpenFlow protocol - and a new algorithm for operating computer networks: SDN technology. Its main value is that it makes it possible to move away from "manual" network management. In today's networks the control and data transmission functions are combined, which makes monitoring and management very difficult. The SDN architecture separates the control process from the data transmission process. This opens up tremendous opportunities for the development of Internet technologies, since SDN does not limit us in anything and brings software to the fore. In Russia, SDN is studied by the Center for Applied Research of Computer Networks.
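A toy sketch of the separation SDN introduces: the control plane decides policy and installs match/action rules, while the data plane only looks packets up in its flow table. The rule format is a drastic simplification of real OpenFlow entries:

```python
# Illustrative sketch of the SDN idea: a controller (control plane) installs
# match/action rules; the switch (data plane) only matches packets against its
# flow table. The rule format is simplified compared with real OpenFlow.

class Switch:
    def __init__(self):
        self.flow_table = []                    # list of (match_fields, action)

    def install_rule(self, match, action):      # called by the controller
        self.flow_table.append((match, action))

    def handle_packet(self, packet):            # pure data-plane lookup
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"             # unknown traffic goes to the controller

switch = Switch()
switch.install_rule({"dst_ip": "10.0.0.5"}, "forward:port2")
switch.install_rule({"dst_port": 23}, "drop")

print(switch.handle_packet({"dst_ip": "10.0.0.5", "dst_port": 80}))   # forward:port2
print(switch.handle_packet({"dst_ip": "10.0.0.9", "dst_port": 443}))  # send_to_controller
```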

Internet technologies of the future. Top 3 most unusual ways of transmitting information

Where will scientific progress move, what will happen to the global telecommunications market, what technologies will become available to ordinary Internet users, and how much can Internet access speeds increase over the next 5-10 years? We will try to answer these and other questions about the Internet technologies of the future. We present our ranking of the top 3 most unusual ways of transmitting information. Today these are experimental developments, but in a few years they may firmly enter our daily lives.

3. In third place is the world's fastest wireless data transmission technology, which uses light vortices. It was invented and first demonstrated in 2011-2012 by scientists from the University of Southern California, Tel Aviv University and the NASA Jet Propulsion Laboratory. The technology makes it possible to accelerate wireless transmission of information to 2.5 Tbit/s (approximately 320 GB/s).

The essence of the technology: electromagnetic waves, twisted into vortices of a strictly defined shape, act as the data transmission channel, and a single wave can carry any number of information streams. This makes it possible to transfer huge amounts of data at ultra-high speeds. Such "light vortices" use orbital angular momentum (OAM), which is far more capable than the spin angular momentum (SAM) used in today's data transmission protocols for Wi-Fi and LTE networks. While testing the technology, the scientists used a single light beam consisting of 8 separate beams with different OAM values.
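A rough consistency check of the quoted numbers, assuming for illustration that the 2.5 Tbit/s aggregate is split evenly across the eight OAM beams (the per-beam figure is an inference, not a reported value):

```python
# Rough consistency check of the figures above. The even split across the
# eight OAM beams is an assumption made only for illustration.

total_rate_bps = 2.5e12          # ~2.5 Tbit/s aggregate
beams = 8

per_beam_bps = total_rate_bps / beams
total_bytes_per_s = total_rate_bps / 8          # bits -> bytes

print(f"Per-beam rate:  {per_beam_bps / 1e9:.0f} Gbit/s")     # ~313 Gbit/s
print(f"Aggregate rate: {total_bytes_per_s / 1e9:.0f} GB/s")  # ~313 GB/s, close to the quoted ~320 GB/s
```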

Application: so far this technology cannot be used to build wireless networks, but it is well suited to fiber optic networks. The latter are approaching their physical limits - there is simply no room left to significantly increase the speed and volume of data transfer - so light-vortex technology could become a new step in the development of fiber-optic Internet connections.

Flaws: this technology is still at the initial stage of development, therefore it is possible to transmit data through light vortices only over a very short distance. Scientists were able to stably transmit information only at a distance of 1 meter.

2. Second place goes to the most penetrating wireless technology in the world: neutrino beams, which can be used to transmit a signal through any object. Neutrino particles can pass through any obstacle practically without interacting with matter. Scientists from the University of Rochester managed to transmit a message through a 240-meter block of rock, something none of the currently available wireless technologies can do. If neutrino beams come into practical use, a signal will not need to go around the Earth but can simply pass through it. This would greatly simplify Internet connections between continents and other distant points.

The essence of the technology: data is transmitted wirelessly using neutrino beams. The neutrino particles are accelerated to nearly the speed of light and pass through any material practically without interacting with it.

Application: in the future, if the technology develops, neutrino beams can be used to transmit information over extremely long distances and to hard-to-reach places. Today, all wireless technologies require line of sight between transmitter and receiver, and this is not always possible. That is why neutrino technology is so interesting and useful for the telecom market.

Flaws: at the moment, equipment for transmitting data through neutrino beams is very expensive and bulky (but we said the same about mobile phones and computers 10-15 years ago). This information transfer technology requires a powerful particle accelerator, of which there are only a few in the world. Scientists who study data transmission via neutrino beams use the Fermilab particle accelerator (4 km in diameter) and the MINERvA particle detector (5 tons).

1. The leader of the ranking is RedTacton technology, which uses the most "biological" data transmission channel of all: human skin. Have you ever watched a spy movie with all its high-tech gadgets and wanted to receive information on your phone with a single touch of your hand, exchange electronic business cards and other data with a handshake, or print documents by simply waving your hand over the printer? All this and much more could become reality if RedTacton technology is developed.

The essence of the technology: every person has an electromagnetic field, and the skin can act as a signal transmission channel between several electronic devices. The technology is based on electro-optical crystals whose properties change under the influence of the human electromagnetic field; a laser then reads these changes from the crystals and converts them into a usable format.

Moreover, the RedTacton system can work not only under normal conditions, but also under water, in a vacuum and in space.

Application: today we often have to use various cables, adapters and so on in order, for example, to connect a phone to a laptop or a printer to a PC. If RedTacton technology develops, all these wires may soon become unnecessary: it will be enough to hold one gadget in one hand and touch the other device with the other hand, and the connection between them will be made through our skin. Already today most smartphones are equipped with screens that respond to the electrical impulses at our fingertips.

And these are only the first steps in popularizing this technology. It could be used in medicine (all your medical data could be recorded on a special chip that warns the doctor about allergies or intolerance to a particular drug the moment they touch you), in the military (a weapon could be made that responds only to its owner's hands, so your children would never be able to harm themselves if they found your pistol or hunting rifle at home), in everyday life (keys to the front door would no longer be needed - just touch the lock and it opens from the electromagnetic impulse), in industry (sensors in factories could warn you about dangerous areas and breakdowns, and a problem could be fixed quickly by simply touching the device), and much more.

Flaws: the technology has not yet been studied enough to say for certain that it is completely harmless to the human body. RedTacton can only be brought to the masses after a great deal of experimentation and research. People with hypersensitivity and certain medical problems (especially heart disease) may be at risk first of all. In addition, the ever-present hackers will eventually find a way to steal people's data or plant computer viruses simply by touching them in transport or on the street. But the main problem with this technology may be human psychology: many people today are afraid of computers, Wi-Fi networks and microwave ovens - imagine what will happen to them if their own body becomes a transmitter of information.

Science and technology are moving forward, and Internet technologies are developing faster than almost anything else. Every year scientists invent new ways to exchange information, communicate at a distance, and collect, store and transmit data. Another ten years, and we will be using every day devices and capabilities that today we can only dream of. And our ranking of the top 3 most unusual ways of transmitting information may have lifted the veil of the future for you just a little.

Recently, American investor Mike Maples spoke about networking as the business of the future, according to Fortune. Maples started investing over 10 years ago. Prior to that, he was a private entrepreneur, so investing was a new challenge for him.

Already at that time he realized that the future belongs to network technologies, not to companies in their usual sense. That is why his first investments went into the newly born Twitter and Twitch. A little later, together with his partner Ann Miura-Ko, he invested in Lyft, Okta and many other projects.

To date, Mike Maples is convinced of the following:

– Software-based networks will become the most valuable businesses and will eventually displace traditional companies

– Networks can significantly improve the well-being of the population in all regions of the world

– Network companies will face tough resistance from governments and traditional companies

To support his words, Maples turns to history. He says that the creation of the steam engine and the railway, together with the advent of the stock market, allowed business to take a step forward, which in turn led to a jump in the population's well-being. Between 1800 and 2000, Maples argues, real incomes grew on average 14-fold, something that had never happened before over such a relatively short period of history.

Previously, large corporations had significant advantages due to the volume of production and a significant division of labor. Today, however, even the largest traditional corporations are losing out to networks, since the latter have a huge number of users who themselves create the so-called network effects, including the instantaneous promotion of various ideas, opinions, goods and services.

You don't have to look far for examples. Uber and Lyft are leaders in the US private transportation market; Airbnb is the leading property rental service; and Apple, 10 years ago, turned the very idea of the mobile phone upside down.

Now we can all observe the intensifying struggle between traditional corporate systems and network ones. Uber and Airbnb are under pressure from local authorities over taxes and allegedly unfair methods of competition. Maples believes that the development of network technologies should ultimately lead to people's prosperity, although at intermediate stages certain industries will respond to progress with job cuts.

Electronics underlies almost all communication. It all started with the invention of the telegraph in 1845, followed by the telephone in 1876. Communication has been improving ever since, and recent progress in electronics has opened a new stage in its development. Wireless communication has reached a new level and now confidently occupies the dominant share of the communications market, and new growth is expected in the wireless sector thanks to the evolving cellular infrastructure and other modern technologies. In this article we will consider the most promising technologies for the near future.

The state of 4G

4G is otherwise known as Long Term Evolution (LTE). LTE is an OFDM technology and the dominant cellular system architecture today. 2G and 3G systems still exist, although the rollout of 4G began in 2011-2012. Today LTE is deployed mainly by the major carriers in the US, Asia and Europe, and its rollout is not yet complete. LTE has gained immense popularity among smartphone owners, since high data rates have opened up possibilities such as video streaming for comfortable movie viewing. However, not everything is so perfect.

Although LTE promised download speeds of up to 100 Mbit/s, this has not been achieved in practice. Speeds of 40 or 50 Mbit/s can be reached, but only under special conditions - with a minimum number of connections and minimal traffic. The most likely data rates are in the range of 10-15 Mbit/s, and during peak hours the speed sags to a few Mbit/s. Of course, this does not make 4G a failure; it simply means its potential has not yet been fully realized.

One of the reasons why 4G does not deliver the declared speed is that there are too many consumers: when the network is used too intensively, the data transfer rate drops significantly.

However, there is hope that this can be corrected. Most carriers providing 4G services have yet to deploy LTE-Advanced, an enhancement that promises to improve data rates. LTE-Advanced uses carrier aggregation (CA) to increase speed: standard LTE carriers of up to 20 MHz are combined into 40 MHz, 80 MHz or 100 MHz blocks to increase throughput. LTE-Advanced also supports an 8 x 8 MIMO configuration, which opens up the potential to raise data rates to 1 Gbit/s.
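A back-of-the-envelope sketch of why carrier aggregation and MIMO push peak rates up: throughput scales roughly with aggregated bandwidth and the number of spatial streams. The per-stream spectral efficiency used below is an assumed round figure for illustration, not a value from the 3GPP specifications:

```python
# Very rough LTE-Advanced peak-rate estimate: bandwidth x spatial streams x
# spectral efficiency. The 3.75 bit/s/Hz per stream is an assumed round value
# for illustration, not taken from the 3GPP specifications.

def peak_rate_mbps(bandwidth_mhz, mimo_streams, bps_per_hz_per_stream=3.75):
    return bandwidth_mhz * mimo_streams * bps_per_hz_per_stream

print(f"{peak_rate_mbps(20, 2):.0f} Mbit/s")    # one 20 MHz carrier, 2x2 MIMO  -> ~150 Mbit/s
print(f"{peak_rate_mbps(100, 4):.0f} Mbit/s")   # 100 MHz aggregated, 4x4 MIMO -> ~1500 Mbit/s (gigabit class)
```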

LTE-CA is also known as LTE-Advanced Pro or 4.5G LTE. This combination of technologies is defined by the 3GPP standards group in Release 13. It includes carrier aggregation as well as License Assisted Access (LAA), a technique that uses LTE in the unlicensed 5 GHz Wi-Fi spectrum. It also defines LTE-Wi-Fi link aggregation (LWA) and dual connectivity, allowing a smartphone to "talk" to both a small-cell node and a Wi-Fi access point at the same time. There are many details in this implementation that we will not go into, but the overall goal is to extend the life of LTE by lowering latency and raising data rates towards 1 Gbit/s.

But that's not all. LTE will be able to deliver higher performance as carriers roll out small cells, enabling higher data rates for more subscribers. Small cells are simply miniature base stations that can be placed anywhere to fill gaps in macrocell coverage, adding capacity where it is needed.

Another way to improve performance is to use Wi-Fi offload. This method hands fast downloads over to the nearest Wi-Fi hotspot when one is available. Only a few carriers offer it, but most are looking at an LTE enhancement called LTE-U (U for unlicensed). This is a method similar to LAA that uses the unlicensed 5 GHz band for fast downloads when the network cannot handle the load. It creates a spectrum conflict with Wi-Fi, which uses the same 5 GHz band, and certain compromises have been devised to make it workable.

As we can see, the potential of 4G has still not been fully revealed. All or most of these improvements will be implemented in the coming years. It is worth noting that smartphone manufacturers will also make hardware and software changes to improve LTE performance. These improvements are likely to arrive as mass adoption of the 5G standard approaches.

Unveiling 5G

There is no such thing as 5G yet, so loud claims about "a completely new standard that can change the approach to wireless information transmission" are premature, even though some providers are already arguing about who will be the first to implement 5G. It is worth remembering the argument of recent years about 4G: after all, real 4G (LTE-A) is not fully here yet either. Nevertheless, work on 5G is in full swing.

The 3rd Generation Partnership Project (3GPP) is working on the 5G standard, which is expected to be rolled out in the coming years. The International Telecommunication Union (ITU), which will "bless" and administer the standard, says 5G should be finally available by 2020. However, early versions of 5G standards will appear sooner as providers compete; some 5G features will show up in one form or another as early as 2017-2018. Full implementation of 5G will not be an easy task: such a system would be one of the most complex wireless networks ever built, if not the most complex. Its full deployment is expected by 2022.

The rationale behind 5G is to overcome the limitations of 4G and add capacity for new applications. The limitations of 4G are mainly subscriber capacity and limited data rates. Cellular networks have already shifted from voice-centric to data-centric operation, but further performance improvements will be needed in the future.

Moreover, a boom in new applications is expected. These include 4K HD video, virtual reality, the Internet of Things (IoT) and machine-to-machine (M2M) communications. Many still predict that between 20 and 50 billion devices will be online, many of them connecting to the Internet via cellular networks. While most IoT and M2M devices operate at low data rates, streaming data (video) requires high speeds. Other potential applications of the 5G standard are smart cities and road transport safety communications.

5G is likely to be more revolutionary than evolutionary. It will involve the creation of a new network architecture overlaid on the 4G network. The new network will use distributed small cells with fiber or millimeter-wave backhaul and will be cost-effective, resilient and easily scalable. In addition, 5G networks will be defined more by software than by hardware: they will use software-defined networking (SDN), network functions virtualization (NFV) and self-organizing network (SON) techniques.

There are also a few other key features:

  • The use of millimeter waves. The first versions of 5G may use the 3.5 GHz and 5 GHz bands. Frequencies from 14 GHz to 79 GHz are also being considered. The final choice has not yet been made, but the FCC says it will be made in the near future. Testing is being carried out at 24, 28, 37 and 73 GHz.
  • New modulation schemes are being considered, most of them variants of OFDM. Two or more schemes may be defined in the standard for different applications.
  • Multiple Input Multiple Output (MIMO) will be included in some form for extended range, data rate, and link reliability.
  • The antennas will be phased arrays with adaptive beamforming and steering.
  • Lower latency is the main goal. Less than 5ms is specified, but less than 1ms is the goal.
  • Data rates of 1 Gbit/s to 10 Gbit/s are expected in 500 MHz or 1 GHz bandwidths (a rough capacity check follows this list).
  • Chips will be made from gallium arsenide, silicon germanium and some CMOS.
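As a rough sanity check of the quoted data rates against the quoted bandwidths, the Shannon capacity C = B·log2(1 + SNR) can be evaluated; the 30 dB signal-to-noise ratio below is an illustrative assumption, not a 5G requirement:

```python
import math

# Shannon capacity C = B * log2(1 + SNR) as a rough upper bound for a single
# link. The 30 dB SNR is an illustrative assumption, not a 5G specification.

def shannon_capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

for bw_hz in (500e6, 1e9):                     # the bandwidths quoted above
    c = shannon_capacity_bps(bw_hz, snr_db=30)
    print(f"{bw_hz / 1e6:.0f} MHz -> {c / 1e9:.1f} Gbit/s")
# 500 MHz -> ~5 Gbit/s, 1 GHz -> ~10 Gbit/s: consistent with the 1-10 Gbit/s target
```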

One of the biggest challenges of 5G adoption is expected to be integrating the standard into cell phones. Modern smartphones are already packed with different transmitters and receivers, and with 5G they will become even more complex. Is such integration really necessary?

Wi-Fi Development Path

Along with cellular communication, Wi-Fi is one of the most popular wireless networks - one of our favorite "utilities". We expect to connect to Wi-Fi networks almost anywhere, and in most cases we get access. Like most popular wireless technologies, it is constantly evolving. The latest released version, 802.11ac, provides speeds of up to 1.3 Gbit/s in the unlicensed 5 GHz band. Applications are also being sought for the 802.11ad ultra-high-frequency 60 GHz (57-64 GHz) standard: it is a proven and cost-effective technology, but who needs 3 to 7 Gbit/s at distances of up to 10 meters?

At the moment there are several development projects for the 802.11 standard. Here are a few of the main ones:

  • 11af is a version of Wi-Fi for the white spaces of the TV band (54 to 695 MHz). Data is transmitted in unoccupied local 6 MHz (or 8 MHz) channels, and data rates of up to 26 Mbit/s are possible. It is sometimes referred to as White-Fi, and the main attraction of 11af is that at these low frequencies the possible range is many kilometers, including non-line-of-sight (NLOS) operation. This version of Wi-Fi is not yet in use, but it has potential for IoT applications.
  • 11ah, labeled HaLow, is another Wi-Fi variant that uses the unlicensed 902-928 MHz ISM band. It is a low-power, low-rate (hundreds of kbit/s) service with a range of up to a kilometer. The target application is IoT.
  • 11ax is an upgrade to 11ac. It can be used in the 2.4 and 5 GHz bands, but will most likely operate in the 5 GHz band exclusively in order to use 80 or 160 MHz channels. Together with 4 x 4 MIMO and OFDA/OFDMA, peak data rates of up to 10 Gbit/s are expected. Final ratification is not due until 2019, although pre-release versions will probably be available earlier.
  • 11ay is an extension of the 11ad standard. It will use the 60 GHz band, and the goal is a data rate of at least 20 Gbit/s. Another goal is to extend the range to 100 meters so that it can serve more applications, such as backhaul for other services. This standard is not expected to be released in 2017.

Wireless networks for IoT and M2M

Wireless is definitely the future of the Internet of Things (IoT) and machine-to-machine (M2M) communications. Although wired solutions are not excluded either, wireless communication is clearly preferable.

A typical IoT device has short range, low power consumption and a low data rate, and is battery powered and paired with a sensor.

An alternative is some kind of remotely controlled actuator.

Or a combination of the two is possible. Both typically connect to the internet via a wireless gateway, but can also connect via a smartphone. The connection to the gateway is also wireless. The question is, what wireless standard will be used?
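Since most such nodes are battery powered and spend nearly all their time asleep, a rough duty-cycle calculation shows why low power consumption matters; all the numbers below are illustrative assumptions, not measurements of any real device:

```python
# Rough battery-life estimate for a battery-powered IoT sensor node.
# All numbers are illustrative assumptions, not measurements of a real device.

battery_mah = 1000.0        # small battery capacity
sleep_current_ma = 0.005    # ~5 uA in deep sleep
tx_current_ma = 40.0        # current draw while transmitting
tx_seconds_per_hour = 2.0   # the node wakes briefly each hour to send a reading

avg_current_ma = (
    tx_current_ma * tx_seconds_per_hour / 3600
    + sleep_current_ma * (3600 - tx_seconds_per_hour) / 3600
)
lifetime_years = battery_mah / avg_current_ma / 24 / 365
print(f"Average current: {avg_current_ma:.4f} mA")        # ~0.027 mA
print(f"Estimated lifetime: {lifetime_years:.1f} years")  # ~4 years
```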

Wi-Fi is the obvious choice, since it is hard to imagine a place without it. But for some applications it is overkill, and for some it is too power-hungry. Bluetooth is another good option, especially its low-power version, Bluetooth Low Energy (BLE). New network and gateway additions to Bluetooth make it even more attractive. ZigBee is another ready-and-waiting alternative, and let's not forget Z-Wave. There are also several 802.15.4 variants, such as 6LoWPAN.

Add to these the newest options, which belong to the category of low-power wide-area networks (LPWAN). These new wireless options offer longer-range network connections than are usually possible with the traditional technologies listed above. Most of them operate in unlicensed spectrum below 1 GHz. Some of the newest contenders for IoT applications are:

  • LoRa - an invention of Semtech, also promoted by Link Labs. The technology uses chirp (linear frequency) modulation at a low data rate to achieve a range of 2-15 km.
  • Sigfox is a French development that uses an ultra narrowband modulation scheme at a low data rate to send short messages.
  • Weightless - Uses television white spaces with cognitive radio techniques for longer ranges and data rates up to 16 Mbps.
  • Nwave is similar to Sigfox, but we haven't been able to gather enough information at the moment.
  • Ingenu - unlike others, this one uses the 2.4 GHz band and a unique random phase multiple access scheme.
  • HaLow is 802.11ah Wi-Fi, described above.
  • White-Fi is 802.11af, described above.

Cellular is definitely an alternative for IoT, since it has been the backbone of machine-to-machine (M2M) communications for over 10 years. M2M mainly uses 2G and 3G wireless modules to monitor remote machines. While 2G (GSM) will eventually be phased out, 3G will be around for a while yet.

A new option is now available: LTE. Specifically, it is called LTE-M and uses a cut-down version of LTE in a 1.4 MHz bandwidth. Another version, NB-LTE-M, uses a 200 kHz bandwidth and operates at a lower rate. All of these options can run on existing LTE networks with updated software. Modules and chips for LTE-M are already available, for example from Sequans Communications.

One of the biggest problems of the Internet of Things is the lack of a single standard, and one is unlikely to appear in the near future. Perhaps there will eventually be several standards - the question is how soon.

Network technology is an agreed set of standard protocols and the software and hardware that implement them (for example, network adapters, drivers, cables and connectors), sufficient to build a computer network. The word "sufficient" emphasizes that this is the minimum set of tools with which a working network can be built. Such a network can perhaps be improved, for example by dividing it into subnets, which will immediately require, in addition to the standard Ethernet protocols, the IP protocol and special communication devices - routers. The improved network will most likely be more reliable and faster, but at the cost of additions built on top of the Ethernet technology that forms the basis of the network.
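A small sketch of the decision that appears once the network is divided into IP subnets: traffic between hosts on the same subnet can stay on the local Ethernet, while traffic between subnets must go through a router. The addresses and subnet sizes are arbitrary examples:

```python
import ipaddress

# Once a LAN is split into IP subnets, a router is needed to forward traffic
# between them. The addresses and prefix lengths below are arbitrary examples.

subnet_a = ipaddress.ip_network("192.168.1.0/24")
subnet_b = ipaddress.ip_network("192.168.2.0/24")

src = ipaddress.ip_address("192.168.1.10")
dst = ipaddress.ip_address("192.168.2.20")

same_subnet = any(src in net and dst in net for net in (subnet_a, subnet_b))
print("Deliver directly over Ethernet" if same_subnet else "Forward via a router")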

The term "network technology" is most often used in the narrow sense described above, but sometimes its extended interpretation is used as any set of tools and rules for building a network, for example, "end-to-end routing technology", "secure channel technology", "IP technology". networks."

The protocols on which a network of a given technology is built (in the narrow sense) were specifically designed to work together, so the network developer needs no additional effort to organize their interaction. Such network technologies are sometimes called basic technologies, bearing in mind that the basis of any network is built on them. Along with Ethernet, well-known examples of basic network technologies include the local area network technologies Token Ring and FDDI and the wide area network technologies X.25 and frame relay. To obtain a working network in this case, it is enough to purchase software and hardware for one basic technology - network adapters with drivers, hubs, switches, cabling, etc. - and connect them in accordance with the requirements of the standard for that technology.

Creation of standard LAN technologies

In the mid-1980s the situation in local networks began to change dramatically. Standard technologies for connecting computers into a network were established - Ethernet, Arcnet, Token Ring. Personal computers served as a powerful stimulus for their development. These mass-produced products were ideal elements for building networks: on the one hand, they were powerful enough to run networking software, and on the other, they clearly needed to pool their processing power to solve complex problems and to share expensive peripherals and disk arrays. Personal computers therefore came to dominate local networks, not only as client machines but also as data storage and processing centers - that is, as network servers - displacing minicomputers and mainframes from these familiar roles.

Standard network technologies turned building a local network from an art into routine work. To create a network, it was enough to purchase network adapters of the appropriate standard, such as Ethernet, and a standard cable, connect the adapters to the cable with standard connectors, and install one of the popular network operating systems, such as NetWare, on the computer. After that the network simply worked, and connecting each new computer caused no problems - provided, of course, that a network adapter of the same technology was installed in it.

Compared with global networks, local networks brought a great deal that was new to the way users work. Access to shared resources became much more convenient: the user could simply browse the lists of available resources rather than remember their identifiers or names. After connecting to a remote resource, it was possible to work with it using the commands already familiar from working with local resources. The consequence, and at the same time the driving force, of this progress was the emergence of a huge number of non-professional users who did not need to learn special (and rather complex) commands for networking. And local network developers were able to provide all these conveniences thanks to the appearance of high-quality cable communication lines, over which even first-generation network adapters provided data rates of up to 10 Mbit/s.

Of course, the developers of global networks could not even dream of such speeds: they had to use whatever communication channels were available, since laying new cable systems thousands of kilometers long for computer networks would have required enormous capital investment. And the only channels "at hand" were telephone lines, poorly suited for high-speed transmission of discrete data - a speed of 1200 bit/s was a good achievement for them. Therefore, economical use of channel bandwidth has often been the main criterion of efficiency for data transmission methods in global networks. Under these conditions, the various procedures for transparent access to remote resources that are standard in local networks long remained an unaffordable luxury for global networks.

Modern trends

Today computer networks continue to develop, and quite rapidly. The gap between local and global networks is constantly shrinking, largely due to the emergence of high-speed long-distance communication channels that are not inferior in quality to the cable systems of local networks. In global networks, resource access services are emerging that are just as convenient and transparent as local network services. The most popular global network, the Internet, provides such examples in abundance.

Local networks are also changing. Instead of a passive cable connecting computers, a wide variety of communication equipment has appeared in them - switches, routers, gateways. Thanks to such equipment it became possible to build large corporate networks with thousands of computers and a complex structure. Interest in large computers has revived, largely because, once the euphoria about the ease of use of personal computers subsided, it became clear that systems consisting of hundreds of servers are harder to maintain than a few large computers. Therefore, on a new turn of the evolutionary spiral, mainframes are returning to corporate computing systems, but now as full-fledged network nodes supporting Ethernet or Token Ring as well as the TCP/IP protocol stack, which, thanks to the Internet, has become the de facto network standard.

Another very important trend has emerged, affecting local and global networks equally. They have begun to carry information that was previously unusual for computer networks - voice, video, drawings. This has required changes to protocols, network operating systems and communications equipment. The difficulty of transmitting such multimedia information over a network lies in its sensitivity to delays in the delivery of data packets: delays usually lead to distortion of this information at the end nodes of the network. Since traditional computer network services such as file transfer or e-mail generate latency-insensitive traffic, and all network elements were designed with that in mind, the advent of real-time traffic has caused serious problems.

Today these problems are being solved in various ways, including with the help of ATM technology, designed specifically to carry different types of traffic. However, despite considerable effort in this direction, an acceptable solution to the problem is still far off, and much remains to be done to achieve the cherished goal: the convergence not only of local and global network technologies, but of the technologies of any information networks - computer, telephone, television and so on. Although today this idea seems utopian to many, serious experts believe that the prerequisites for such a synthesis already exist, and opinions differ only in the estimated timescale, with figures ranging from 10 to 25 years. Moreover, it is believed that the basis for convergence will be the packet switching technology used today in computer networks rather than the circuit switching used in telephony, which should only increase interest in networks of this type.
