Security on the Internet
How do you secure something that is changing faster than you can fix it? The Internet has had security problems since its earliest days as a pure research project. Today, after several years and orders of magnitude of growth, it still has security problems. It is being used for a purpose for which it was never intended: commerce. It is somewhat ironic that the early Internet was designed as a prototype for a high-availability command and control network that could resist outages resulting from enemy actions, yet it cannot resist college undergraduates. The problem is that the attackers are on, and make up a part of, the network they are attacking. Designing a system that is capable of resisting attack from within, while still growing and evolving at a breakneck pace, is probably impossible. Deep infrastructure changes are needed, and once you have achieved a certain amount of size, the sheer inertia of the installed base may make it impossible to apply fixes.
The challenges for the security industry are growing. With electronic commerce spreading over the Internet, there are issues such as nonrepudiation to be solved. Financial institutions will have both technical concerns, such as the security of a credit card number or banking information, and legal concerns about holding individuals responsible for their actions, such as their purchases or sales over the Internet. Issuance and management of encryption keys for millions of users will pose a new type of challenge.
While some technologies have been developed, only an industry-wide effort and cooperation can minimize risks and ensure privacy for users, data confidentiality for the financial institutions, and nonrepudiation for electronic commerce.
With the continuing growth in linking individuals and businesses over the Internet, some social issues are starting to surface. Society may take time to adapt to the new concept of transacting business over the Internet. Consumers may take time to trust the network and accept it as a substitute for transacting business in person. Another class of concerns relates to restricting access over the Internet. Preventing distribution of pornography and other objectionable material over the Internet has already been in the news. We can expect new social hurdles over time and hope the great benefits of the Internet will continue to outweigh these hurdles through new technologies and legislation.
The World Wide Web is the single largest, most ubiquitous source of information in the world, and it sprang up spontaneously. People use interactive Web pages to obtain stock quotes, receive tax information from the Internal Revenue Service, make appointments with a hairdresser, consult a pregnancy planner to determine ovulation dates, conduct election polls, register for a conference, search for old friends, and the list goes on. It is only natural that the Web's functionality, popularity, and ubiquity have made it the seemingly ideal platform for conducting electronic commerce. People can now go online to buy CDs, clothing, concert tickets, and stocks. Several companies, such as Digicash, Cybercash, and First Virtual, have sprung up to provide mechanisms for conducting business on the Web. The savings in cost and the convenience of shopping via the Web are incalculable. Whereas most successful computer systems result from careful, methodical planning, followed by hard work, the Web took on a life of its own from the very beginning. The introduction of a common protocol and a friendly graphical user interface was all that was needed to ignite the Internet explosion. The Web's virtues are extolled without end, but its rapid growth and universal adoption have not been without cost. In particular, security was added as an afterthought.
New capabilities were added ad hoc to satisfy the growing demand for features without carefully considering the impact on security. As general-purpose scripts were introduced on both the client and the server sides, the dangers of accidental and malicious abuse grew. It did not take long for the Web to move from the scientific community to the commercial world. At this point, the security threats became much more serious. The incentive for malicious attackers to exploit vulnerabilities in the underlying technologies is at an all-time high. This is indeed frightening when we consider what attackers of computer systems have accomplished when their only incentive was fun and boosting their egos. When business and profit are at stake, we cannot assume anything less than the most dedicated and resourceful attackers trying their utmost to steal, cheat, and otherwise harm users of the Web.
When people use their computers to surf the Web, they have many expectations. They expect to find all sorts of interesting information, they expect to have opportunities to shop, and they expect to be bombarded with all sorts of ads. Even people who do not use the Web are in jeopardy of being impersonated on the Web.
There are simple and advanced methods for ensuring browser security and protecting user privacy. The simpler techniques are user certification schemes, which rely on digital IDs. Netscape Navigator and Internet Explorer allow users to obtain and use personal certificates. Currently, the only company offering such certificates is Verisign, which offers digital IDs that consist of a certificate of a user's identity, signed by Verisign. There are four classes of digital IDs, each representing a different level of assurance in the identity, and each comes at an increasingly higher cost. The assurance is determined by the effort that goes into identifying the person requesting the certificate.
Class 1 Digital IDs, intended for casual Web browsing, provide users with an unambiguous name and e-mail address within Verisign's domain. A Class 1 ID provides assurance to the server that the client is using an identity issued by Verisign but little guarantee about the actual person behind the ID.
Class 2 Digital IDs require third-party confirmation of name, address, and other personal information related to the user, and they are available only to residents of the United States and Canada. The information provided to Verisign is checked against a consumer database maintained by Equifax. To protect against insiders at Verisign issuing bogus digital IDs, a hardware device is used to generate the certificates.
Class 3 Digital IDs are not yet available. Their purpose is to bind an individual to an organization. Thus, a user in possession of such an ID could, theoretically, prove that he or she belongs to the organization that employs him or her.
The idea behind Digital IDs is that they are entered into the browser and then are automatically sent when users connect to sites requiring personal certificates. Unfortunately, the only practical effect is to make impersonating users on the network only a little bit more difficult.
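To make the mechanics concrete, here is a minimal sketch of presenting a personal certificate programmatically, using Python's standard ssl module. The file names, host, and port are placeholders invented for the example, not part of any scheme described above; a real digital ID would be issued by a certificate authority.

    import socket
    import ssl

    # Build a TLS context that verifies the server as usual.
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

    # Load a personal certificate and private key; these file names are
    # hypothetical stand-ins for a digital ID obtained from a CA.
    context.load_cert_chain(certfile="my_digital_id.pem", keyfile="my_key.pem")

    # Connect to a (hypothetical) site that requires personal certificates.
    with socket.create_connection(("www.example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.org") as tls:
            # The server may request and verify the client certificate
            # during the handshake that has just completed.
            print("negotiated", tls.version())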
Many Web sites require their users to register a name and a password. When users connect to these sites, their browser pops up an authentication window that asks for these two items. Usually, the browser then sends the name and password to the server, which can then allow retrieval of the remaining pages at the site. The authentication information can be protected from eavesdropping and replay by using the SSL protocol.
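As a rough illustration of this exchange, the sketch below fetches a password-protected page over HTTPS with the widely used requests library; because the URL uses HTTPS, SSL/TLS protects the credentials in transit. The URL, name, and password are invented for the example.

    import requests

    # Hypothetical protected page; credentials are placeholders.
    response = requests.get(
        "https://www.example.org/members/report.html",
        auth=("alice", "s3cret"),  # sent as an Authorization header
    )

    # 200 means the server accepted the credentials and returned the page;
    # 401 means authentication failed and a browser would prompt again.
    print(response.status_code)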
As the number of sites requiring simple authentication grows, so does the number of passwords that each user must maintain. In fact, users are often required to have several different passwords for systems in their workplace, for personal accounts, for special accounts relating to payroll and vacation, and so on. It is not uncommon for users to have more than six sites they visit that require passwords.
In the early days of networking, firewalls were intended less as security devices than as a means of preventing broken networking software or hardware from crashing wide-area networks. In those days, malformed packets or bogus routes frequently crashed systems and disrupted servers. Desperate network managers installed screening systems to reduce the damage that could happen if a subnet’s routing tables got confused or if a system’s Ethernet card malfunctioned. When companies began connecting to what is now the Internet, firewalls acted as a means of isolating networks to provide security as well as enforce an administrative boundary. Early hackers were not very sophisticated; neither were early firewalls.
Today, firewalls are sold by many vendors and protect tens of thousands of sites. The products are a far cry from the first-generation firewalls, now including fancy graphical user interfaces, intrusion detection systems, and various forms of tamper-proof software. To operate, a firewall sits between the protected network and all external access points. To work effectively, firewalls have to guard all access points into the network's perimeter; otherwise, an attacker can simply go around the firewall and attack an undefended connection.
The simple days of firewalls ended when the Web exploded. Suddenly, instead of handling only a few simple services in an "us versus them" manner, firewalls must now contend with complex data and protocols. Today's firewall has to handle multimedia traffic, attached downloadable programs (applets), and a host of other protocols plugged into Web browsers. This development has produced a basic conflict: The firewall is in the way of the things users want to do. A second problem has arisen as many sites want to host Web servers: Does the Web server go inside or outside of the firewall? Firewalls are both a blessing and a curse. Presumably, they help deflect attacks. They also complicate users' lives, make Web server administrators' jobs harder, rob network performance, add an extra point of failure, cost money, and make networks more complex to manage.
Firewall technologies, like all other Internet technologies, are rapidly changing. There are two main types of firewalls, plus many variations. The main types of firewalls are proxy and network-layer. The idea of a proxy firewall is simple: Rather than have users log into a gateway host and then access the Internet from there, give them a set of restricted programs running on the gateway host and let them talk to those programs, which act as proxies on behalf of the user. The user never has an account or login on the firewall itself, and he or she can interact only with a tightly controlled, restricted environment created by the firewall's administrator.
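A minimal sketch of the proxy idea follows, relaying a single TCP service through a gateway host. The listening port and upstream host are arbitrary choices for illustration; a real proxy would add access control, protocol checks, and logging.

    import socket
    import threading

    LISTEN_PORT = 8080                       # port the proxy listens on (arbitrary)
    UPSTREAM = ("internal.example.org", 80)  # hypothetical destination host

    def pipe(src, dst):
        # Copy bytes one way until the sending side closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        try:
            dst.shutdown(socket.SHUT_WR)  # signal end of stream downstream
        except OSError:
            pass

    def handle(client):
        # Open a second connection and relay traffic in both directions,
        # so the two endpoints never talk to each other directly.
        upstream = socket.create_connection(UPSTREAM)
        threading.Thread(target=pipe, args=(client, upstream)).start()
        threading.Thread(target=pipe, args=(upstream, client)).start()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", LISTEN_PORT))
    server.listen(5)
    while True:
        conn, addr = server.accept()
        handle(conn)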
This approach greatly enhances the security of the firewall itself because it means that users do not have accounts or shell access to the operating system. Most UNIX bugs require that the attacker have a login on the system to exploit them. By throwing the users off the firewall, it becomes just a dedicated platform that does nothing except support a small set of proxies; it is no longer a general-purpose computing environment. The proxies, in turn, are carefully designed to be reliable and secure because they are the only real point of the system against which an attack can be launched.
Proxy firewalls have evolved to the point where today they support a wide range of services and run on a number of different UNIX and Windows NT platforms. Many security experts believe that proxy firewalls are more secure than other types of firewalls, largely because the first proxy firewalls were able to apply additional controls to the data traversing the proxy. The real reason for proxy firewalls was their ease of implementation, not their security properties. For security, it does not really matter where in the processing of data the security check is made; what is more important is that it is made at all. Because they do not allow any direct communication between the protected network and the outside world, proxy firewalls inherently provide network address translation. An outside site only ever sees connections coming from the firewall's proxy address, so the firewall in effect hides and translates the addresses of the systems behind it.
Prior to the invention of firewalls, routers were often pressed into service to provide security and network isolation. Many sites connecting to the Internet in the early days relied on ordinary routers to filter the types of traffic allowed into or out of the network. Routers operate on each packet as a unique event unrelated to previous packets, filtering on IP source address, IP destination address, IP port number, and a few other basic fields contained in the packet header. Filtering, strictly speaking, does not constitute a firewall because it does not have quite enough detailed control over data flow to permit building highly secure connections. The biggest problem with using filtering routers for security is the FTP protocol, which, as part of its specification, makes a callback connection in which the remote system initiates a connection to the client, over which data is transmitted.
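The stateless, per-packet decision a filtering router makes can be sketched in a few lines; the rules, networks, and ports below are invented for illustration.

    from ipaddress import ip_address, ip_network

    # Each rule: (action, source network, destination network, destination port).
    # The addresses and ports here are hypothetical examples.
    RULES = [
        ("deny",  ip_network("0.0.0.0/0"), ip_network("10.0.0.0/8"),  23),  # block inbound Telnet
        ("allow", ip_network("0.0.0.0/0"), ip_network("10.0.1.5/32"), 80),  # allow the web server only
    ]

    def filter_packet(src, dst, dport, default="deny"):
        # Each packet is judged on its own, with no memory of earlier
        # packets; this statelessness is why callback protocols like FTP
        # are so awkward to filter safely.
        for action, src_net, dst_net, port in RULES:
            if ip_address(src) in src_net and ip_address(dst) in dst_net and dport == port:
                return action
        return default

    print(filter_packet("192.0.2.7", "10.0.1.5", 80))  # allow
    print(filter_packet("192.0.2.7", "10.0.3.9", 23))  # deny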
Cryptography is at the heart of computer and network security. The important cryptographic functions are encryption, decryption, one-way hashing, and digital signatures. Ciphers are divided into two categories: symmetric ciphers and asymmetric, or public-key, systems. Symmetric ciphers are functions where the same key is used for encryption and decryption. Public-key systems can be used for encryption, but they are also useful for key agreement and digital signatures. Key-agreement protocols enable two parties to compute a secret key, even in the face of an eavesdropper.
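Diffie-Hellman is the classic key-agreement protocol; the toy numbers below show only the arithmetic and are far too small for real use.

    # Toy Diffie-Hellman key agreement. p and g are public; in practice
    # p would be a prime of 2048 bits or more, not 23.
    p, g = 23, 5

    a = 6    # Alice's private value, never transmitted
    b = 15   # Bob's private value, never transmitted

    A = pow(g, a, p)   # Alice sends A over the open network
    B = pow(g, b, p)   # Bob sends B over the open network

    # Each side combines its own secret with the other's public value;
    # an eavesdropper who sees only p, g, A, and B cannot easily do the same.
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob
    print(shared_alice)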
Symmetric ciphers are the most efficient way to encrypt data so that its confidentiality and integrity are preserved. That is, the data remains secret to those who do not possess the secret key, and modifications to the ciphertext can be detected during decryption. Two of the most popular symmetric ciphers are the Data Encryption Standard (DES) and the International Data Encryption Algorithm (IDEA). The DES algorithm operates on blocks of 64 bits at a time using a key length of 56 bits. The 64 bits are permuted according to the value of the key, so encryption with two keys that differ in only one bit produces two completely different ciphertexts. The most popular mode of DES is called Cipher Block Chaining (CBC) mode, in which output from the previous block is mixed with the plaintext of each block. Because the first block has no previous output to mix in, it uses a special value called the Initialization Vector.
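The sketch below shows CBC mode in code. It uses the third-party cryptography package and, because DES is long retired, substitutes AES; the chaining idea (each plaintext block mixed with the previous ciphertext block, the first with the IV) is the same.

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)   # random 128-bit key (DES would use 56 bits)
    iv = os.urandom(16)    # the Initialization Vector for the first block

    # Pad the message to a whole number of 128-bit blocks.
    padder = padding.PKCS7(128).padder()
    plaintext = padder.update(b"Attack at dawn") + padder.finalize()

    # In CBC mode each plaintext block is XORed with the previous
    # ciphertext block before encryption; the first block uses the IV.
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    recovered = decryptor.update(ciphertext) + decryptor.finalize()
    print(recovered)   # padded plaintext; unpadding recovers the message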
Despite its size and rapid growth, the Web is still in its infancy. So is the software industry. We are just beginning to learn how to develop secure software, and we are beginning to understand that for our future, if it is to be online, we need to incorporate security into the basic underpinnings of everything we develop.