
On the table: the most widely discussed issues in the information technology (IT) industry: Gigabit Ethernet vs. ATM, Windows NT vs. everyone else, intranets, and more. Participating in the conversation were Daniel Brier and Christine Heckart, President and Director of TeleChoice, respectively; Scott Bradner, an information technology consultant at Harvard University; Tom Noll, President of CIMI Corporation; Mark Gibbs, president of Gibbs & Co.; and Dave Kearns, a freelance journalist and consultant based in Austin, Texas.

NW: Many readers have difficulty choosing a strategy for building a local area network (LAN) backbone. Gigabit Ethernet, ATM, Fast Ethernet, IP switching: there is no shortage of technologies, but it is unclear which of them represents the main direction of development. What aspects should network administrators consider when planning their next-generation networks?

Noll: The key issue is scalability. The best backbone technology will be the one that can be integrated with existing networks without a large investment of time and money. This means both ATM and Gigabit Ethernet will find a use if they cost about the same; cost control becomes the major concern.

Gibbs: The main question is, can you afford it? Every major deployment should be preceded by a pilot project. Over the next six months, the key aspects of the new generation of high-speed backbone technologies will take clearer shape: we will find out which standards get approved, which manufacturers prove stable, and how problematic these technologies are to deploy and maintain.

Heckart: When making a decision in this area, there are only three main issues to consider: price, performance and longevity. The problem is that analysts talk about these things in absolute terms, while network administrators cannot. Everything depends on the specific network environment, the applications in use, the tasks at hand, the budget allocated, and so on.

What works well enough for one company (or even one group of users) may not hold up for another. You need to define what "good enough" means, and then implement a solution that is cheap enough, performs well enough, and will last long enough to meet the demands of today and the foreseeable future. The trouble many users run into comes from trying to determine what is "best". But the "best" changes every week and can never actually be deployed, because by the time it is, it is no longer the best.

Brier: Too many managers try to find homogeneous solutions, when the best result usually comes from a combination of different technologies. Many companies will find a mix of ATM, Fast Ethernet and Ethernet (or some other combination) appropriate, since different offices and user groups have different needs. The main thing is that the choice of solution be based on real needs, not on an attempt to deploy the latest and greatest technology.

Kearns: The vast majority of existing network connections are based on Ethernet technology, and that will continue to be the case. At present there is no compelling reason to switch to another technology for building backbones. Ten megabits for desktop connections and 100 Mbit/s for the backbone continue to work (and not badly) in most existing networks. Planning a move to Gigabit Ethernet for the backbone and 100 Mbit/s for the main segments of the network (and eventually for desktop systems) seems entirely reasonable.

The catch is that network throughput is not always the bottleneck. The performance of servers, routers and switches, disk channels, bus speeds, buffer sizes and five or six other things require no less attention. Pipes that are too "fat" simply waste resources.

Bradner: I would say that the biggest problem for network designers is the combination of partial awareness and complete conviction that they are right. Too many decisions about the direction of corporate networking have been made on the basis of general considerations rather than an analysis of the real needs of the existing network community. Someone in management reads a report from a major consulting firm saying that "ATM is the answer" (what exactly was the question?) and makes a decision accordingly. What should have been done instead is a technical analysis of the specific network's needs, followed by a design based on the results of that analysis. Many technologies are promising, because every network is different.

NW: ATM vs Gigabit Ethernet - real rivalry or nonsense?

Noll: In reality, this is a competition between different network planning paradigms, often presented as a competition between technologies. The Gigabit Ethernet paradigm says: "Invest in bandwidth, not in managing it, because bandwidth is cheap enough to more than meet your network's needs." The ATM paradigm says: "Managing bandwidth is essential; bandwidth cannot be left to chance, so you need a network architecture that lets you control it." Price may be a deciding factor, but buyers are strongly attracted to the simplicity of the Gigabit Ethernet approach. The problem is that we would like this competition to play out at the level of technical capability, but in reality it turns out quite differently.

Gibbs: This rivalry is driven by the enormous investment that has been made in the earlier technologies. If the newer technologies turn out to be much simpler and cheaper, switching to them promises solid revenue for their manufacturers. ATM vendors do not want their investment in that technology to have been wasted, and they will try to "throw stones" at the Gigabit Ethernet vendors.

Heckart: The absurdity of this and other ATM-related debates is that arguments only the networking elite could appreciate are being blown up for the general public. In reality this is a question for a limited audience. Gigabit Ethernet, however, has firmer ground to stand on, more supporters, better supply channels and virtually everything else needed to win the war. ATM has a more agile army armed with more sophisticated weapons, but numbers and correct positioning usually win.

For any buyer who does not require the additional features ATM provides, such as guaranteed quality of service (QoS), the simplest path is to choose a technology that is convenient enough and solves the problems at hand. Unlimited bandwidth solves, if not everything, then a great many network problems, and Gigabit Ethernet provides effectively unlimited bandwidth for most network environments.

Brier: This is a classic example of an elegant approach competing with established views. To win a war, it is enough to win most of the battles. Many projects have been built on ATM, from carrier networks to corporate and home offices. Carriers such as Ameritech, PacBell, SBC and BellSouth have already realized that ATM can be very promising for corporate and home offices. The question now is how far the technology will penetrate into home and office networks. If you use ATM at home to connect five devices, isn't that a home LAN? Maybe. So ATM will become more widespread than many people think.

Kearns: This rivalry is real only from a marketing point of view; if you ignore the advertising, the answer is obvious. Gigabit Ethernet will become the dominant technology for the same reason that 10 Mbit/s Ethernet beat Token Ring and 100 Mbit/s Ethernet beat FDDI: more and more network administrators understand the benefits of Ethernet and feel comfortable using it.

Bradner: The competition between these technologies exists in the campus backbone. It is easy to see that Gigabit Ethernet will make it easier and less expensive (compared with ATM) to meet most, if not all, current campus backbone needs. The only doubt is QoS. However, QoS capabilities are rarely used in today's campus networks, because existing applications, as well as the Ethernet and Token Ring networks to which almost all desktop systems are connected, do not support them.

There is no competition in the wide area network (WAN) arena: Gigabit Ethernet does not support long-distance links (3 km at most) and requires dedicated fiber. I also doubt there will be much competition in the building backbone space, where Fast Ethernet and Gigabit Ethernet have the potential to displace ATM completely.

NW: Many people are now talking about network-centric computing, arguing that we are moving away from heavily loaded desktop applications toward thinner clients that will run Java and ActiveX applets. Is this worth believing?

Noll: Nonsense! It is nothing more than another attempt to resurrect the old idea of diskless workstations, replacing "dumb" terminals with "half-dumb" network computers and displacing "smart" PCs.

Gibbs: In principle, it is all correct, but there are a number of problems. The transition to thin clients is complex, and it will be a long time before the leading software vendors take serious steps to port their products to the new platforms. The idea of a network computer is good, but it lacks practicality: users will not be able to abandon their PCs in less than three years, by which time the next generation of desktop applications will have matured.

Not all the problems come from "fat" applications. Network computers will require more network bandwidth than today's applications, and the requirements for server performance and disk capacity will grow significantly. And of course there is security, security and more security. It is not yet clear what level of protection Java applets and ActiveX controls can provide, although the latter look much less convincing in this regard.

Heckart: I would rather say there is some truth in it. Everyone knows that the problem network computers are trying to solve is real. We are tired of installing new programs and finding that they eat up the last of the disk space on a computer that a year ago was considered state of the art (it is especially galling when 90% of the functionality buried in those myriad lines of code goes unused 98% of the time). Loading what you need, exactly when you need it, is a great idea. I think network computers could change network architecture, the way software is sold, and network services. Perhaps all for the better.

Brier: In my opinion, the situation is being over-dramatized. Some of our customers want to deploy next-generation fax devices that use IP networks as the transport mechanism. These devices have elements of the computers you are talking about. What should we call them: "thin" clients, "weak" PCs, or something else? We still call them fax devices, because they solve very specific tasks. Once again, I want to emphasize that the elements of a device can have very different characteristics, and labels only confuse the matter.

Kearns: Today's programmers do not think about the compactness of their code the way they did, say, 10 to 15 years ago. As a result, users spend a lot of time waiting for individual modules of modern applications to load from the network, and ultimately give up on using them.

Bradner: Everything in these judgments is correct, except the orientation toward a homogeneous set of requirements. There seems to be an urgent need to find one answer to every existing question, perhaps because the real world is too complex and messy. In many places applications run on "dumb" terminals or X Window terminals, and the thin client-to-network model works just fine. But there are plenty of other places where users get excellent work done on local computers that are perfectly suited to their problems and do not need replacing.

NW: Another hot topic that is widely discussed is Quality of Service (QoS). What are the key QoS capabilities that network administrators should consider, and what should they do to implement them?

Bradner: This is a very old story, dating back at least to 1964, when the possibility of building data networks on packet transmission rather than connection establishment first began to be widely discussed. Proponents of the traditional approach condemned the idea of packet-based networks even then. For many years (thankfully now in the past), IBM specialists argued that it was impossible to build a corporate data network on TCP/IP, since the protocol is based on routed or switched packets; the corporate network, they argued, needs guaranteed QoS, which is achievable only in connection-oriented networks.

There are three types of QoS worth talking about: probabilistic QoS, which guarantees with high probability that enough network and server resources will be available to perform certain tasks at a given time; on-demand (per-application) QoS, in which specific resources are reserved for each IP call or resource-intensive application when it starts; and class-based QoS, which defines different levels (classes) of network usage and treats the traffic of each class differently.

Probabilistic QoS is already used quite actively in modern networks and works especially well in campus networks with plenty of bandwidth. I see class-based QoS as the next step, and on-demand QoS as an exciting prospect with many scalability, authentication and accounting issues still to be resolved.
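
To make the class-based idea concrete, here is a minimal sketch (in Python, with purely illustrative class names and strict-priority scheduling; real equipment uses far more elaborate queuing) of how traffic in different classes can be treated differently:

```python
# Minimal sketch of class-based queuing: packets are tagged with a traffic
# class and the scheduler drains higher-priority queues first. Class names
# and the strict-priority policy are illustrative only.
from collections import deque

CLASSES = ["voice", "business", "best_effort"]  # highest to lowest priority

class ClassBasedScheduler:
    def __init__(self):
        self.queues = {c: deque() for c in CLASSES}

    def enqueue(self, packet, traffic_class):
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        # Strict priority: serve the first non-empty queue in class order.
        for c in CLASSES:
            if self.queues[c]:
                return c, self.queues[c].popleft()
        return None, None

sched = ClassBasedScheduler()
sched.enqueue("web-page", "best_effort")
sched.enqueue("rtp-frame", "voice")
print(sched.dequeue())  # ('voice', 'rtp-frame') is sent before the best-effort traffic
```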

Noll: The concept of QoS is fairly well defined, although not everyone agrees on it. Peak and average data rates, delay and its permissible variation, and the acceptable error level are all well understood as the key parameters. The question is not what QoS is, but what has to be done to ensure it. There are two options: manage the bandwidth, or spend money on expanding it. The network administrator must evaluate the cost of each approach and weigh its advantages and disadvantages, remembering that allocating resources is like taxation: to give something to some, you have to take it from others. This is why simply buying additional bits per second is such an attractive approach for users.

Heckart: Recently, the term QoS has come to mean very different things. Unfortunately, many service providers define QoS in a way that takes a PhD to understand, and verifying that it is actually being delivered requires at least a protocol analyzer. What benefit to end users can we talk about then?

Sprint has a good idea: provide a specific quality of service matched to a user's specific applications. And although the model itself still needs work, all providers should remember the KISS principle (Keep It Simple, Stupid!). Most managers care about network availability (uptime), response time and throughput. For some applications, such as real-time voice, network latency can be added to that list.

One of the biggest concerns network administrators have about the latest QoS specifications is that it is virtually impossible to verify that what you receive is what you were promised. An ideal provider should clearly define what it means by QoS, give the customer a way to check that the agreed quality is being delivered, and back it with automatic penalties for failing to provide the agreed level of service. The benefit of QoS is that users will be able to select services more intelligently and better understand which types of connection (frame relay, leased lines or ATM) best meet the needs of a specific office or application.

Brier: I look at QoS in relation to ATM and WAN networks, where individual applications are given different access to resources depending on what they are trying to do. To take advantage of QoS, network administrators need to quantify their needs. That will bring them back to a real understanding of what each office and application requires and make them realize that no single solution fits all.

Kearns: For the user, QoS means: "Can I do what I want, when I want?" For a network administrator, this translates into terms such as "availability" (100 percent availability of all services through clustering and redundancy), "performance" (predictable throughput anytime, anywhere) and "directory services" (easy access to objects and services).

NW: Let's return to the question of "thin" clients for a moment. Manufacturers of NetPCs and NCs promise to reduce the cost of administering networks and systems. Will they really deliver the big savings expected of them, or will they simply shift costs onto networks and servers?

Kearns: There is a big difference between the NetPC and the NC. The NC requires more powerful servers and more network bandwidth. But either way the expenses are unavoidable: new equipment and infrastructure, training and support.

Noll: Consign the NetPC and the NC to the scrap heap right after diskless workstations. Turn them into space heaters, or into amusing high-tech metal collages mounted on concrete pedestals in front of company headquarters. The NC is a replacement for "dumb" terminals, and the NetPC is nothing more than advertising hype.

Gibbs: There is still no catalog of applications and tools that would make us believe network computers really exist. In addition, the cost of upgrading the infrastructure promises to be very high. Most companies will need two to three years to recoup the investment, so for now network computers make sense only as test systems. Real trials have not yet been carried out, and they may prove invaluable. Of course, we need to keep watching the market, but I would recommend not getting too excited until there are real applications and complete systems based on the NC or NetPC, not just bare boxes.

Bradner: I don't see much difference between NetPCs, NCs and terminals, and I doubt they will differ much in price. A corporation is unlikely to save any real money by throwing out its old 3270 terminals and replacing them with NCs (unless you count the savings from no longer repairing the 3270s). I also doubt that switching from "real" PCs to NetPCs or NCs will achieve significant savings. The general set of costs is well known: training, software and so on. I think these and many other costs will balance everything out.

NW: Listening to some people, you would think intranets are already the corporate computing platform of today. What steps should IT professionals take to move toward an intranet? Which applications will forever remain outside the intranet? What is the biggest mistake made with intranets?

Heckart: Intranets are well suited to organizations that need to give a large number of employees access to information or to organize electronic communication. That is why you first need to build the network itself. The biggest mistake in building an intranet is the lack of a clear understanding of what needs to be achieved and, accordingly, what needs to be done. As a result, many separate intranets get created for different user groups and different network resources, which reduces or eliminates the overall cost savings.

Noll: I don't know who would consider an intranet a corporate computing platform. Our surveys found that although more than 90% of companies claim to be committed to the idea of intranets, only 7% have a real understanding of what an intranet is and how it differs from an ordinary corporate information or IP network. If you try to evaluate objectively what an intranet is, it becomes clear that it imposes no restrictions on applications (beyond those inherent in any other data network) except their cost.

Gibbs: Applications that have significant database requirements and those that have very complex functionality, such as real-time multimedia, will never be intranet-compatible.

Brier: The biggest mistake with intranets is getting bogged down in the details. In my opinion, a good manager should define the intranet in very general terms, as the collection of information shared within an organization, and then prioritize the benefits that can be achieved most quickly and effectively by moving to this internal information highway, or intranet.

Kearns: The intranet is a good way to reduce paper consumption and ensure timely access to needed information. Some of the best uses for intranets today are human resources, marketing communications, form automation (such as travel reports or vacation requests) and project management: areas where you can combine traditional operational information with data warehouses. Data entry applications, however, are not yet ready for intranets.

An intranet should attract users no less than Internet sites. To do this, it is necessary to pay close attention to design issues and the quality of the service provided. Poor design is a serious mistake.

Bradner: Intranets are another example of something being presented as a one-size-fits-all solution without regard to real needs. For most people, intranets are Web-based network services; yet today they are being presented as the single answer to every question. I think that within the next few years TCP/IP will become the primary network protocol for virtually all enterprise networks, with SNA remaining only on legacy systems. But I am not ready to speak with the same confidence about which applications will be used. Web-based and Java systems can accommodate applications with complex data processing, but in many cases dedicated desktop software will remain the far more suitable solution.

NW: Many of our readers would like, in the long term, to use the Internet as the backbone for a distributed corporate network. Is this a reasonable goal?

Noll: This view rests on a set of unrealistic economic assumptions. People see that they can get unlimited Internet access for $20 a month and think: "If 20 bucks buys me 28 Kbit/s, then 140 bucks should buy me a T-1." Bandwidth costs money, and someone always pays that money. A kind of subsidy operates on the Internet: users who use it little pay for those who use it heavily. If corporate America got unlimited access to the Internet, service providers would go under within a week. Internet prices are not going to fall. Some customers are offered reduced rates, but that is only possible as long as a limited number of people take advantage of them.

Gibbs: Yes, the economic appeal is there, and combined with virtual private networks (VPNs) and the willingness of Internet providers to sign contracts with guaranteed QoS, it all looks very plausible. Companies should move away from their private network infrastructures as quickly as possible.

Heckart: Companies would like their network to be cheap and ubiquitous so that it can be used for many tasks. For some remote offices the Internet fits that role well; for other offices and applications it does not, though tomorrow the situation may change.

The industry is likely to create multiple interconnected intranets, extranets and internets designed to support different applications and user communities. Such networks will emerge over the next few years and will largely replace the private and public networks used today for voice, fax, video and data. The services these networks provide will not be cheap, but they will cost several orders of magnitude less than private networks do today.

The main barrier to this bright future is not technology but the huge profits of today's service providers, whose business would be significantly constrained by a transition to Internet-based services.

Brier: There is no reason why intranet applications cannot be reached over frame relay and ATM networks in just the same way. Why give them up? You have solutions that can be accessed from many different networks, and relying only on the Internet for this would be a mistake. It is just one possible transport mechanism.

Kearns: It is not wise to do this right now, because you lose the company's control over the use of the corporate backbone. At best, the Internet should be treated as a backup channel to be used if the private backbone fails. Saving a few dollars is not worth giving up the reliability, control and security that private networks provide. It is the equivalent of the CIO ditching his car and taking the bus...

Bradner: Would we feel better if the Internet were called a national information infrastructure provided by a telecommunications company? That is exactly what the Internet is becoming, and exactly what the proponents of the National Information Infrastructure, pushed so hard by government and the press a few years ago, proposed to put in its place. I disagree with the assertion that private networks have capabilities the Internet cannot provide, especially considering that almost all corporate wide area networks use TCP/IP. Judged by their feature sets, it is very hard to distinguish private networks from public TCP/IP networks. Over the next few years I expect class-based QoS features to become widespread on the Internet, eliminating one of the last significant advantages of private data networks over public ones.

NW: Will Windows NT take over the world? Are there any significant drawbacks to this OS?

Noll: NT has already taken over the world; Unix system vendors just don't know yet that they are out of the game. Identifying weaknesses is important, but the most important characteristic of any operating system is how users feel about it, and they feel better about NT than about any other server or multi-user system. Unix fans, go ahead and send your angry e-mail around the world! I only predict the future; I don't make it.

Gibbs: The anti-Microsoft camp flying the Java flag is very active, and indirectly that hurts NT. There is no doubt that NT 4.0 is a great OS, but it cannot satisfy every need and replace NetWare and Unix. I would grant NT a dominant position, but I would not award it final victory.

Kearns: NT is a good replacement for Unix in the application server market. But it is still very far from dominating the network OS market, and that may never happen, since it doesn't look as if Microsoft has really gotten around to networking. This desktop software maker will always remain one.

Electronics underpins almost all communication. It began with the invention of the telegraph in 1845, followed by the telephone in 1876. Communications have improved steadily, and recent advances in electronics have opened a new stage in their development. Today wireless communication has reached a new level and confidently occupies the dominant share of the communications market. Further growth in the wireless sector is expected thanks to the evolving cellular infrastructure as well as other modern technologies. In this article we will look at the most promising technologies of the near future.

4G status

4G today essentially means Long Term Evolution (LTE). LTE is an OFDM-based technology and is the dominant cellular architecture today. 2G and 3G systems still exist, although 4G deployment began in 2011-2012. Today LTE has been rolled out mainly by the largest operators in the USA, Asia and Europe, and its deployment is not yet complete. LTE has gained immense popularity among smartphone owners, since high data rates have made things like streaming video for movie watching practical. However, not everything is perfect.

Although LTE promised download speeds of up to 100 Mbit/s, this has not been achieved in practice. Speeds of 40 or 50 Mbit/s can be reached, but only under special conditions: with a minimal number of connections and minimal traffic. The most typical data rates are in the range of 10 to 15 Mbit/s, and during peak hours speeds drop to a few Mbit/s. This does not make 4G a failure; it means its potential has not yet been fully realized.

One of the reasons 4G does not deliver the advertised speed is the large number of consumers sharing each cell: the more intensively a cell is used, the lower the data rate each subscriber sees.
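
A back-of-the-envelope illustration of that point, using assumed numbers rather than measurements: the peak capacity of a cell is shared among its active users, so per-user throughput falls quickly as load rises.

```python
# Illustrative arithmetic: why advertised LTE peak rates rarely appear.
cell_capacity_mbps = 100      # assumed theoretical peak of a 20 MHz LTE cell
active_users = (2, 10, 40)    # light, moderate, busy-hour load (assumed)

for users in active_users:
    per_user = cell_capacity_mbps / users  # capacity shared across active users
    print(f"{users:>2} active users -> ~{per_user:.1f} Mbit/s each")
# 2 users -> ~50 Mbit/s, 10 -> ~10 Mbit/s, 40 -> ~2.5 Mbit/s, which lines up
# with the 10-15 Mbit/s typical figure and "a few Mbit/s at peak hours" above.
```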

There is hope this can be corrected, however. Most operators offering 4G services have not yet implemented LTE-Advanced, an upgrade that promises higher data rates. LTE-Advanced uses carrier aggregation (CA) to increase speed: standard 20 MHz LTE carriers are combined into 40 MHz, 80 MHz or 100 MHz blocks to increase capacity. LTE-Advanced also supports 8 x 8 MIMO configurations, which opens the potential for data rates up to 1 Gbit/s.
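
A rough scaling sketch of how carrier aggregation and extra MIMO streams multiply the peak rate. The 150 Mbit/s baseline for one 20 MHz carrier with 2x2 MIMO is an assumed Category 4-style figure; real peaks depend on modulation, signal quality and overhead.

```python
# Toy scaling model for LTE-Advanced carrier aggregation and MIMO.
baseline_mbps = 150      # assumed peak for one 20 MHz carrier, 2x2 MIMO
baseline_bw_mhz = 20
baseline_streams = 2

def peak_rate(bandwidth_mhz, streams):
    # Rate grows roughly in proportion to aggregated bandwidth and spatial streams.
    return baseline_mbps * (bandwidth_mhz / baseline_bw_mhz) * (streams / baseline_streams)

print(peak_rate(40, 2))    # two aggregated carriers        -> ~300 Mbit/s
print(peak_rate(100, 4))   # 100 MHz aggregate, 4x4 MIMO    -> ~1500 Mbit/s (ideal)
```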

LTE-CA is also known as LTE-Advanced Pro or 4.5G LTE. This combination of technologies is defined by the 3GPP standards group in Release 13. It includes carrier aggregation as well as Licensed Assisted Access (LAA), a technique that runs LTE in the unlicensed 5 GHz Wi-Fi spectrum. It also defines LTE-Wi-Fi link aggregation (LWA) and dual connectivity, allowing a smartphone to "talk" to both a small cell and a Wi-Fi access point at the same time. There are many details we won't go into here, but the overall goal is to extend the lifespan of LTE by reducing latency and pushing data rates toward 1 Gbit/s.

But that's not all. LTE will deliver higher performance as operators roll out their small cell strategies, bringing faster data rates to more subscribers. Small cells are simply miniature base stations that can be installed anywhere to fill gaps in macrocell coverage and add capacity where it is needed.

Another way to improve performance is to use Wi-Fi offload: when a Wi-Fi hotspot is available, traffic is handed off to it for fast downloads. Only a few operators offer this so far, but most are considering an LTE enhancement called LTE-U (U for unlicensed). Similar to LAA, it uses the unlicensed 5 GHz band for extra capacity when the network cannot handle the load. This creates a spectrum conflict with Wi-Fi, which uses the same 5 GHz band, and compromises have been worked out to let the two coexist.

As we can see, the potential of 4G has not yet been fully realized. All or most of these improvements will be rolled out in the coming years. Smartphone manufacturers will also make hardware and software changes to improve LTE performance. These improvements will most likely appear as mass adoption of the 5G standard begins.

5G Discovery

There is no 5G as such yet, so loud claims about "a completely new standard that will change the approach to wireless information transmission" are premature. Some Internet service providers are already arguing about who will be first to deploy 5G, but it is worth remembering the controversy of recent years around 4G: real 4G (LTE-A) is still not fully here. Nevertheless, work on 5G is in full swing.

The 3rd Generation Partnership Project (3GPP) is working on the 5G standard, which is expected to be rolled out in the coming years. The International Telecommunication Union (ITU), which will bless and administer the standard, says 5G should be fully available by 2020. Still, early versions of 5G will appear sooner as providers compete: some 5G capabilities will show up in one form or another as early as 2017-2018. Full implementation of 5G will be no easy task; such a system will be one of the most complex wireless networks ever built, if not the most complex. Its full deployment is expected by 2022.

The rationale behind 5G is to overcome the limitations of 4G and add capabilities for new applications. The main limitations of 4G are per-subscriber bandwidth and limited data rates. Cellular networks have already shifted from voice to data, but further performance improvements will be needed.

Moreover, a boom in new applications is expected: HD 4K video, virtual reality, the Internet of Things (IoT) and machine-to-machine (M2M) communications. Many forecasts still put the number of connected devices at 20 to 50 billion, a large share of which will reach the Internet over cellular networks. While most IoT and M2M devices operate at low data rates, streaming data (video) requires high speeds. Other potential applications of 5G include smart cities and road transport safety communications.

5G is likely to be more revolutionary than evolutionary. It will involve a new network architecture overlaid on the 4G network. The new network will use distributed small cells with fiber or millimeter-wave backhaul, and it is meant to be economical, resilient and easily scalable. In addition, 5G networks will be defined more by software than by hardware, using software-defined networking (SDN), network functions virtualization (NFV) and self-organizing network (SON) techniques.

There are also several other key features:

  • Millimeter waves. The first versions of 5G may use the 3.5 GHz and 5 GHz bands, and frequencies from 14 GHz to 79 GHz are also being considered. A final choice has not yet been made, but the FCC says it will come soon. Testing is being carried out at 24, 28, 37 and 73 GHz.
  • New modulation schemes are being considered, most of them variants of OFDM. Two or more schemes may be defined in the standard for different applications.
  • Multiple input multiple output (MIMO) will be included in some form to improve range, data rate, and communication reliability.
  • The antennas will have phased arrays with adaptive beamforming and steering.
  • Lower latency is the main goal. Less than 5 ms is specified, but less than 1 ms is the target.
  • Data rates from 1 Gbit/s to 10 Gbit/s are expected in 500 MHz or 1 GHz of bandwidth (a quick check of the implied spectral efficiency follows this list).
  • The chips will be made from gallium arsenide, silicon germanium and some CMOS.
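
As noted in the data-rate bullet above, those figures imply a certain spectral efficiency; a quick, purely illustrative check:

```python
# Spectral efficiency implied by the 5G data-rate targets above.
def spectral_efficiency(rate_gbps, bandwidth_mhz):
    # bits per second delivered per hertz of spectrum
    return rate_gbps * 1e9 / (bandwidth_mhz * 1e6)

print(spectral_efficiency(1, 500))    # 2.0 bit/s/Hz  - modest, single stream territory
print(spectral_efficiency(10, 1000))  # 10.0 bit/s/Hz - needs high-order QAM plus MIMO
```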

One of the biggest challenges in implementing 5G is expected to be integrating the standard into cell phones. Modern smartphones already contain a great many transmitters and receivers, and with 5G they will become even more complex. Is such integration necessary?

Wi-Fi development path

Alongside cellular, Wi-Fi is one of the most popular wireless networks. Wi-Fi is one of our favorite "utilities": we expect to connect to a Wi-Fi network almost anywhere, and in most cases we do. Like most popular wireless technologies, it is constantly evolving. The latest released version, 802.11ac, provides speeds of up to 1.3 Gbit/s in the unlicensed 5 GHz band. Applications are also being sought for 802.11ad at the extremely high frequency of 60 GHz (57-64 GHz). It is a proven and cost-effective technology, but who needs 3 to 7 Gbit/s over distances of up to 10 meters?

At the moment there are several development projects around the 802.11 standard. Here are a few of the main ones:

  • 11af is a version of Wi-Fi for the white spaces of the television band (54 to 695 MHz). Data is transmitted in locally unoccupied 6 MHz (or 8 MHz) channels, and rates up to 26 Mbit/s are possible. Sometimes called White-Fi, the main attraction of 11af is that at these low frequencies the possible range is many kilometers, and line of sight is not required (NLOS operation). This version of Wi-Fi is not yet in use but has potential for IoT applications.
  • 11ah, designated HaLow, is another Wi-Fi variant that uses the unlicensed 902-928 MHz ISM band. It is a low-power, low-speed (hundreds of kbit/s) service with a range of up to a kilometer, aimed at IoT applications.
  • 11ax is an upgrade to 11ac. It can be used in the 2.4 and 5 GHz bands, but will most likely operate in 5 GHz solely to take advantage of 80 or 160 MHz channels. Along with 4 x 4 MIMO and OFDA/OFDMA, peak data rates of up to 10 Gbit/s are expected. Final ratification will not occur until 2019, although preliminary versions will likely be complete earlier.
  • 11ay is an extension of 11ad. It will use the 60 GHz band, with a goal of at least 20 Gbit/s data rates and a range extended to 100 meters to support more applications, such as backhaul for other services. This standard is not expected to be released in 2017.

Wireless networks for IoT and M2M

Wireless communications are certainly the future of the Internet of Things (IoT) and machine-to-machine (M2M) communications. Wired solutions are not ruled out, but wireless is clearly preferable.

A typical Internet of Things device is a short-range, low-power, low-data-rate, battery-powered sensor. An alternative is some kind of remotely controlled actuator, and a combination of the two is also possible. Both typically connect to the Internet through a wireless gateway, though they can also connect through a smartphone; the link to the gateway is wireless as well. The question is, which wireless standard will be used?

Wi-Fi is the obvious choice, since it is hard to imagine a place without it. But for some applications it is overkill, and for others too power-hungry. Bluetooth is another good option, especially its low-power version, Bluetooth Low Energy (BLE). New networking and gateway additions to Bluetooth make it even more attractive. ZigBee is another alternative ready and waiting, and let's not forget Z-Wave. There are also several 802.15.4-based options, such as 6LoWPAN.

Add to these the newest options, collectively known as low-power wide-area networks (LPWAN). These new wireless options offer longer-range network connections than are usually possible with the traditional technologies mentioned above, and most operate in unlicensed spectrum below 1 GHz. Some of the newest contenders for IoT applications are:

  • LoRa is an invention of Semtech, supported by Link Labs. It uses linear frequency (chirp) modulation at low data rates to achieve ranges of 2-15 km.
  • Sigfox is a French development that uses an ultra-narrowband modulation scheme at low data rates to send short messages.
  • Weightless – uses TV white spaces with cognitive radio techniques for longer ranges and data rates up to 16 Mbps.
  • Nwave is similar to Sigfox, but we haven't been able to gather enough information at the moment.
  • Ingenu - Unlike the others, this one uses the 2.4 GHz band and a unique random phase multiple access scheme.
  • HaLow is 802.11ah Wi-Fi, described above.
  • White-Fi is 802.11af, described above.

Cellular is definitely an IoT alternative, since it has been the backbone of machine-to-machine (M2M) communications for more than 10 years. M2M mainly uses 2G and 3G wireless modules to monitor remote machines. While 2G (GSM) will eventually be phased out, 3G will be around for a while yet.

A new standard is now available as well: LTE. Specifically, LTE-M uses a cut-down version of LTE in 1.4 MHz of bandwidth. Another version, NB-LTE-M, uses 200 kHz of bandwidth for lower speeds. All of these options can run on existing LTE networks with updated software. Modules and chips for LTE-M are already available, for example from Sequans Communications.

One of the biggest problems with the Internet of Things is the lack of a single standard, and one is unlikely to appear anytime soon. Perhaps several standards will eventually converge, but how soon?

To understand how a local network works, you first need to understand the concept of network technology.

Network technology consists of two components: network protocols and the equipment that implements those protocols. A protocol, in turn, is a set of "rules" by which computers on a network connect to each other and exchange information. Network technologies are what give us the Internet and the local connections between the computers in your home. The fundamental network technologies are called basic technologies; they also go by another, prettier name: network architectures.

A network architecture defines several parameters that you need to understand at least a little in order to grasp how a local network is structured:

1) Data transfer rate. Determines how much information, usually measured in bits, can be transmitted over the network in a given time.

2) Network frame format. Information travels across the network as so-called "frames", packets of information. Different network technologies use different frame formats.

3) Type of signal encoding. Determines how information is encoded into electrical impulses on the network.

4) Transmission medium. The material (usually a cable) through which the flow of information passes, the same information that ultimately appears on our monitor screens.

5) Network topology. The layout of the network, in which the "edges" are cables and the "vertices" are the computers they connect. Three main topologies are common: ring, bus and star.

6) Media access method. Three methods of accessing the transmission medium are used: deterministic access, random access and priority-based transmission. The most common is the deterministic method, in which a special algorithm divides the time the medium may be used among all the computers attached to it. With random access, computers compete for the medium; this method has a number of drawbacks, one of which is the loss of part of the transmitted information when packets collide on the network. Priority access, accordingly, gives the most capacity to the station with the highest established priority.


Together, this set of parameters defines a network technology.

The most widespread network technology today is IEEE 802.3/Ethernet. It became widespread thanks to its simplicity and low cost, and it remains popular because such networks are easier to maintain. Ethernet networks are usually built in a "star" or "bus" topology. The transmission medium can be thin or thick coaxial cable, twisted pair, or fiber optic cable. Ethernet networks typically range from 100 to 2000 meters in length, and the data rate is about 10 Mbit/s. Ethernet networks typically use the CSMA/CD access method, which belongs to the decentralized random access methods.
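
As a small illustration of how CSMA/CD's random access resolves collisions, here is a sketch of the truncated binary exponential backoff it uses; the slot time shown is the classic 10 Mbit/s Ethernet value, and everything else is simplified.

```python
# Sketch of truncated binary exponential backoff as used by CSMA/CD.
import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbit/s Ethernet, in microseconds

def backoff_delay(collision_count):
    # After the n-th collision, wait a random number of slots in [0, 2^k - 1],
    # where k = min(n, 10); the frame is dropped after 16 failed attempts.
    if collision_count > 16:
        raise RuntimeError("frame dropped after 16 collisions")
    k = min(collision_count, 10)
    slots = random.randint(0, 2**k - 1)
    return slots * SLOT_TIME_US

for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} microseconds")
```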

There are also high-speed variants of Ethernet: IEEE 802.3u/Fast Ethernet and IEEE 802.3z/Gigabit Ethernet, providing data rates of up to 100 Mbit/s and up to 1000 Mbit/s, respectively. In these networks the transmission medium is predominantly optical fiber or shielded twisted pair.

There are also less common, but still widely used network technologies.

The IEEE 802.5/Token Ring network technology is characterized by the fact that all vertices, or nodes (computers), in the network are joined in a ring; it uses a token-passing access method and supports shielded and unshielded twisted pair as well as optical fiber as transmission media. Speeds in a Token Ring network reach 16 Mbit/s. The maximum number of nodes in such a ring is 260, and the total length of the network can reach 4000 meters.


The IEEE 802.4/ArcNet local network is distinctive in that it uses a token-passing access method, in which the right to transmit data is handed from node to node. It is one of the oldest networks and was once among the most popular in the world, owing to its reliability and low cost. Today this network technology is less common, since its speed is quite low: about 2.5 Mbit/s. Like most other networks, it uses shielded and unshielded twisted pair and fiber optic cable as transmission media; a network can be up to 6000 meters long and include up to 255 subscribers.

The FDDI (Fiber Distributed Data Interface) network architecture is based on IEEE 802.4/ArcNet and is very popular thanks to its high reliability. This network technology uses two fiber optic rings up to 100 km long, which also gives a high data rate of about 100 Mbit/s. The point of having two rings is that one of them carries a redundant copy of the data, which reduces the chance of losing transmitted information. Such a network can have up to 500 subscribers, another advantage over other network technologies.

Recently, American investor Mike Maples spoke about network technologies as the business of the future, Fortune reports. Maples began investing more than 10 years ago; before that he was a private entrepreneur, so investing was a new challenge for him.

Even then he realized that the future belonged to networks rather than to companies in the usual sense. That is why his first investments went into the then-nascent Twitter and Twitch. A little later, together with his partner Ann Miura-Ko, he backed Lyft, Okta and many other projects.

Today, Mike Maples is convinced of the following:

– Software-based networks will become the most valuable businesses and will eventually displace traditional companies

– Networks can significantly improve the well-being of people in all regions of the world

– Network companies will face stiff resistance from governments and traditional companies

To back up his words, Maples turns to history. He says that the creation of the steam engine and the railways, coinciding with the emergence of the stock market, allowed business to leap forward, which in turn produced a jump in the population's well-being. From 1800 to 2000, Maples argues, real incomes grew by an average of 14 times, something never before seen in such a relatively short period of history.

Previously, large corporations held significant advantages thanks to production volumes and a deep division of labor. Today, however, even the largest traditional corporations are losing out to networks, because the latter have enormous numbers of users who themselves create so-called network effects, including the instant spread of ideas, opinions, goods and services.

You don't have to look far for examples: Uber and Lyft lead the US private transportation market, Airbnb is the leading property rental service, and Apple revolutionized the very idea of the mobile phone ten years ago.

We can already watch the escalating struggle between traditional corporate systems and networked ones. Uber and Airbnb face pressure from local authorities over taxes and allegedly "unfair" competition practices. Maples believes the development of network technologies should ultimately lead to greater prosperity, even though at intermediate stages some industries respond to progress by cutting jobs.
