CS707 Network Security Midterm Short Question


1. What is the difference between active attacks and passive attacks?

ACTIVE ATTACK:
As the word "active" suggests, this is a direct attack: the attacker modifies or disrupts the system, so the victim becomes aware that an attack has taken place. For example, if someone plants a logic bomb on your PC, the PC may shut down once the bomb is triggered.
PASSIVE ATTACK:
A passive attack is indirect: the attacker only observes or eavesdrops, so the attacked host remains completely unaware of it, which is why it is called passive. For example, an attacker silently monitoring (sniffing) a host's traffic is carrying out a passive attack.
It is hard to say in general which is more harmful; depending on the situation, either an active or a passive attack may cause the greater damage.

2. In the context of hashing, what is meant by compression?
Compression must occur before encryption because compression is ineffective on encrypted data: compression algorithms work by detecting redundancy and structure in the data, while encryption is designed to hide redundancy and structure. Properly encrypted data is essentially incompressible; conversely, if compression still works on "encrypted" data, the encryption layer should be viewed with deep suspicion.
When hashing occurs in PGP, it is either part of a signature algorithm or an integrity check, generally known as a MAC. There are several ways to build a MAC; the theoretically "good" way is to apply the MAC to the encrypted data. However, PGP dates from an older time when the theory was not yet fully worked out: it uses a plain hash function (i.e., a function with no key) and includes the hash value in the encrypted data (see section 5.13), so the hash is turned into a MAC by virtue of reusing the encryption key. In such a MAC, the underlying hash is computed over whatever is encrypted, which is the compressed data (if compression was used at all). Since the question speaks of compression "between" the hash and the encryption, it is presumably not about that hash at all.
Compressing a sequence of characters drawn from an alphabet uses string substitution with no a priori information. An input data block is processed into an output data block composed of variable-length incompressible data sections and variable-length compressed token sections. Multiple hash tables, based on different subblock sizes, are used for string matching; this improves both the compression ratio and the rate of compression. Using multiple hash tables allows selection of an appropriate compression data rate and/or compression factor for the input data, and combining them with a recoverable hashing method further improves compression ratio and rate. Each incompressible data section contains a means of distinguishing it from the compressed token sections.
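The claim that compression fails on properly encrypted data can be sketched with the standard library's zlib module. Here, random bytes from os.urandom() stand in for ciphertext, since good encryption makes output statistically indistinguishable from random data (an assumption of this sketch, not a measurement of any particular cipher):

```python
import os
import zlib

# Highly redundant "plaintext": compresses very well.
plaintext = b"attack at dawn " * 200

# Random bytes stand in for properly encrypted data, which is
# statistically indistinguishable from random.
ciphertext_like = os.urandom(len(plaintext))

compressed_plain = zlib.compress(plaintext)
compressed_cipher = zlib.compress(ciphertext_like)

print(len(plaintext), len(compressed_plain))         # plaintext shrinks dramatically
print(len(ciphertext_like), len(compressed_cipher))  # "ciphertext" barely shrinks, if at all
```

Running this shows the redundant plaintext shrinking to a small fraction of its size while the random data stays essentially the same length, which is exactly why PGP compresses before it encrypts.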

3. What is the X.509 standard?
PKI is an ISO authentication framework that uses public key cryptography and the X.509 standard.
In cryptography, X.509 is an ITU-T standard for a public key infrastructure (PKI) and Privilege Management Infrastructure (PMI). X.509 specifies, amongst other things, standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm.
The standard for how the CA creates the certificate is X.509, which dictates the different fields used in the certificate and the valid values that can populate those fields.
We are currently at version 3 of this standard, often denoted X.509v3. Many cryptographic protocols use this type of certificate, including SSL/TLS.
The certificate includes the serial number, version number, identity information, algorithm information, lifetime dates, and the signature of the issuing authority.
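The fields listed above can be modeled as a simple record. This is only an illustrative sketch: the field names follow the structure defined by the standard, but the values are placeholders, and a real certificate is DER-encoded ASN.1, not a Python object:

```python
from dataclasses import dataclass

@dataclass
class X509Certificate:
    """Illustrative model of the main X.509 certificate fields."""
    version: int                     # e.g. 3 for X.509v3
    serial_number: int               # unique per issuing CA
    signature_algorithm: str         # e.g. "sha256WithRSAEncryption"
    issuer: str                      # the CA's distinguished name
    not_before: str                  # start of the validity period
    not_after: str                   # end of the validity period
    subject: str                     # the identity the certificate binds
    subject_public_key_info: bytes   # the certified public key
    signature: bytes                 # the CA's signature over all the above

cert = X509Certificate(
    version=3,
    serial_number=1_000_001,
    signature_algorithm="sha256WithRSAEncryption",
    issuer="CN=Example CA",
    not_before="2024-01-01",
    not_after="2025-01-01",
    subject="CN=example.com",
    subject_public_key_info=b"<public key bytes>",
    signature=b"<CA signature bytes>",
)
print(cert.serial_number, cert.subject)
```

The CA's signature covers every other field, which is what lets a relying party detect tampering with any of them.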
4. What are one-way functions? How are they implemented in cryptography?
A one-way function is a mathematical function that is easier to compute in one direction than in the opposite direction.
An analogy of this is when you drop a glass on the floor. Although dropping a glass on the floor is easy, putting all the pieces back together again to reconstruct the original glass is next to impossible.
This concept is similar to how a one-way function is used in cryptography, which is what the RSA algorithm, and all other asymmetric algorithms, are based upon.
The easy direction of computation in the one-way function that is used in the RSA algorithm is the process of multiplying two large prime numbers.

Multiplying the two numbers to get the resulting product is much easier than factoring the product and recovering the two initial large prime numbers used to calculate the obtained product, which is the difficult direction.
RSA is based on the difficulty of factoring large numbers that are the product of two large prime numbers.
Attacks on these types of cryptosystems do not necessarily try every possible key value, but rather try to factor the large number, which will give the attacker the private key.
When a user encrypts a message with a public key, the message is encoded with a one-way function (breaking the glass). This function supplies a trapdoor (knowledge of how to put the glass back together), but the trapdoor can only be taken advantage of if it is known about and the correct code is applied. The private key provides this service.
The private key knows about the trapdoor, knows how to derive the original prime numbers, and has the necessary programming code to take advantage of this secret trapdoor to unlock the encoded message (reassembling the broken glass). Knowing about the trapdoor and having the correct functionality to take advantage of it are what make the private key private.
When a one-way function is carried out in the easy direction, encryption and digital signature verification functionality are available. When the one-way function is carried out in the hard direction, decryption and signature generation functionality are available.
This means only the public key can carry out encryption and signature verification and only the private key can carry out decryption and signature generation.
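The easy/hard asymmetry can be sketched with the classic textbook RSA values (p = 61, q = 53). This is a toy sketch only: real keys use primes hundreds of digits long, which makes the trial-division step below computationally infeasible:

```python
# Toy RSA with tiny primes (real RSA uses primes hundreds of digits long).
p, q = 61, 53                 # the secret primes
n = p * q                     # 3233 -- multiplying is the easy direction
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: 2753 (Python 3.8+ modular inverse)

m = 65                        # message
c = pow(m, e, n)              # encrypt with the public key (easy)
assert pow(c, d, n) == m      # decrypt with the private key (easy, given the trapdoor)

# The hard direction: recovering p and q from n alone.
def factor(n: int):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None

print(factor(n))  # (53, 61) -- feasible for a toy n, infeasible for a 2048-bit n
```

Knowing p and q lets anyone recompute d, which is why factoring n is equivalent to breaking the key: the primes are the trapdoor.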
Part-II            Long Questions (10 marks each)
1. What security measures are adopted in the layers of the OSI model?
The OSI reference model for networking (ISO 7498-1) is designed around seven layers arranged in a stack. The OSI security architecture reference model (ISO 7498-2) is also designed around seven layers, reflecting a high-level view of the different requirements within network security.

Layers (ISO 7498-1)     ISO 7498-2 Security Model
-------------------     -------------------------
Application             Authentication
Presentation            Access Control
Session                 Non-Repudiation
Transport               Data Integrity
Network                 Confidentiality
Data Link               Assurance / Availability
Physical                Notarization / Signature

In the OSI model approach, security is addressed at each layer of the model. By comparing the OSI model in depth with the concept of Application Security by Defense, IT managers can better understand that securing an enterprise application takes more than authentication, encryption, OS hardening, and so on. At each layer of the OSI model there are security vulnerabilities and, therefore, security prevention measures that can be taken to ensure that enterprise applications are protected. Importantly, the capability IT managers have to mitigate risks decreases at the higher OSI model layers.

One reason IT managers have less power to protect applications at the higher OSI layers is that at these higher layers, developers have much more influence over security measures.
However, security measures are possible at every OSI layer. Addressing security threats at every layer reduces the risk of enterprise application compromise or Denial of Service.
Examples of vulnerabilities and solutions at each layer provide a better understanding of the topics presented.

Risks/Attacks and their Measures

The OSI Physical layer represents physical application security, which includes access control, power, fire, water, and backups. Many of the threats to security at the Physical layer cause a Denial of Service (DoS) of the enterprise application, making the application unavailable to enterprise users.
Physical locks, both on equipment and facilities housing the equipment, are imperative to keep intruders out. In order to use information, one must have access to it. Security cables on laptops and system cases with power button locks are examples of procuring equipment with physical security capabilities.
The Data Link layer of the OSI model encompasses switch security topics such as ARP spoofing, MAC flooding, and spanning-tree attacks.
Simple configuration changes to the network switch can help protect enterprise applications from Data layer attacks.
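As an example of such configuration changes, a sketch of Cisco-IOS-style switch hardening might look like the following (interface name and thresholds are illustrative; consult the vendor documentation for the exact syntax of your platform):

```
! Limit each access port to one learned MAC address (mitigates MAC flooding)
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation shutdown
 ! Shut the port down if an unexpected BPDU arrives (mitigates spanning-tree attacks)
 spanning-tree portfast
 spanning-tree bpduguard enable
```

Features such as dynamic ARP inspection can similarly be enabled per VLAN to counter ARP spoofing.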
The Network and Transport layers of the OSI model are where the most common security precautions take place: these layers are where routers and firewalls are implemented. Threats that occur at these layers include unauthorized retrieval of endpoint identity, unauthorized access to internal systems, SYN flood attacks, and the "ping of death."
Implementing Network Address Translation, Access Control Lists, and firewall technologies mitigate these risks.
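An access control list can be sketched as an ordered rule list evaluated first-match-wins, with an implicit "deny all" at the end, as in most real ACL implementations (the rule format and function names below are illustrative, not any vendor's syntax):

```python
# Each rule: (action, source_prefix, dest_port); first match wins,
# with an implicit "deny all" if no rule matches.
ACL = [
    ("permit", "10.0.0.",    443),  # internal hosts may reach HTTPS
    ("deny",   "10.0.0.",    23),   # but never Telnet
    ("permit", "192.168.1.", 80),
]

def acl_allows(src_ip: str, dest_port: int) -> bool:
    for action, prefix, port in ACL:
        if src_ip.startswith(prefix) and dest_port == port:
            return action == "permit"
    return False  # implicit deny

print(acl_allows("10.0.0.7", 443))   # True
print(acl_allows("10.0.0.7", 23))    # False
print(acl_allows("172.16.0.1", 80))  # False (implicit deny)
```

Real ACLs match on prefix lengths and protocol fields rather than string prefixes, but the first-match, default-deny evaluation order is the same.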
The Session and Presentation layers are the lower layers of the Application Set of the OSI model. At these layers, the IT manager’s ability to mitigate application security risk begins to diminish as developers take a bigger role in protecting applications.
IT managers can prevent unauthorized login/password accesses and unauthorized data accesses, which are common attacks at these layers, by using encryption and authentication methods.
The Application layer is the final layer of the Application Set and the OSI model. Many security protection methods are the responsibility of the programmer at this layer. Backdoor attacks occur at this level and it is the programmer’s responsibility to close those doors.
IT managers can use access control methods described to assist in preventing backdoor attacks; also, IT managers can set up tools such as virus scanners, WebInspect, and intrusion detection devices to help prevent compromise of enterprise applications.

2. Define and discuss various components of PKI infrastructure.
The comprehensive system required to provide public-key encryption and digital signature services is known as a public-key infrastructure. The purpose of a public-key infrastructure is to manage keys and certificates. By managing keys and certificates through a PKI, an organization establishes and maintains a trustworthy networking environment. A PKI enables the use of encryption and digital signature services across a wide variety of applications.
A PKI may be made up of the following entities and functions:
• CA (Certificate Authority)
• RA (Registration Authority)
• Certificate repository
• Certificate revocation system
• Key backup and recovery system
• Automatic key update and management of key histories
• Timestamping
• Client-side software

The detail of each component is as follows:
     1.     CA (Certificate Authority)
A CA is a trusted organization (or server) that maintains and issues digital certificates. When a person requests a certificate, the registration authority (RA) verifies that individual’s identity and passes the certificate request off to the CA.
The CA constructs the certificate, signs it, sends it to the requester, and maintains the certificate over its lifetime.
When another person wants to communicate with this person, the CA will basically vouch for that person's identity.
      2.     RA (Registration authority)
The registration authority (RA) performs the certification registration duties. The RA establishes and confirms the identity of an individual, initiates the certification process with a CA on behalf of an end user, and performs certificate life-cycle management functions.
The RA cannot issue certificates, but can act as a broker between the user and the CA. When users need new certificates, they make requests to the RA, and the RA verifies all necessary identification information before allowing a request to go to the CA.
3.     Certificate repository
Certificate repositories store certificates so that applications can retrieve them on behalf of users. The term repository refers to a network service that allows for distribution of certificates. Over the past few years, the consensus in the information technology industry is that the best technology for certificate repositories is provided by directory systems that are LDAP (Lightweight Directory Access Protocol)-compliant. 
     4.      Certificate revocation system
Certificates that are no longer trustworthy must be revoked by the CA. There are numerous reasons why a certificate may need to be revoked prior to the end of its validity period. For instance, the private key (that is, either the signing key or the decryption key) corresponding to the public key in the certificate may be compromised. Alternatively, an organization's security policy may dictate that the certificates of employees leaving the organization must be revoked. In these situations, users in the system must be informed that continued use of the certificate is no longer considered secure. The revocation status of a certificate must be checked prior to each use. As a result, a PKI must incorporate a scalable certificate revocation system. The CA must be able to securely publish information regarding the status of each certificate in the system. Application software, on behalf of users, must then verify the revocation information prior to each use of a certificate. The combination of publishing and consistently using certificate revocation information constitutes a complete revocation system.
CRL: The most popular means for distributing certificate revocation information is for the CA to create secure certificate revocation lists (CRLs) and publish these CRLs to a directory system. CRLs specify the unique serial numbers of all revoked certificates. Prior to using a certificate, the client-side application must check the appropriate CRL to determine if the certificate is still trustworthy. Client-side applications must check for revoked certificates consistently and transparently on behalf of users.
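The client-side check against a CRL reduces to a set-membership test on the certificate's serial number. A minimal sketch (serial numbers and the helper name are illustrative; a real client must also verify the CA's signature on the CRL and its freshness):

```python
# The CA publishes the serial numbers of all revoked certificates.
crl = {1_000_017, 1_000_042}  # illustrative revoked serials

def certificate_trusted(serial: int, crl: set) -> bool:
    """A certificate is only usable if its serial is absent from the CRL."""
    return serial not in crl

print(certificate_trusted(1_000_001, crl))  # True  -- not revoked
print(certificate_trusted(1_000_042, crl))  # False -- revoked, reject
```

Because this check runs transparently before every certificate use, users never have to know the CRL exists.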
     5.      Key backup and recovery system
To ensure users are protected against loss of data, the PKI must support a system for backup and recovery of decryption keys. With respect to administrative costs, it is unacceptable for each application to provide its own key backup and recovery. Instead, all PKI-enabled client applications should interact with a single key backup and recovery system. The interactions between the client-side software and the key backup and recovery system must be secure, and the interaction method must be consistent across all PKI-enabled applications.
     6.     Key update and management of key histories:
Cryptographic key pairs should not be used forever. They must be updated over time. As a result, every organization needs to consider two important issues:
Updating users’ key pairs, and
Maintaining, where appropriate, the history of previous key pairs.
Updating users’ key pairs: The process of updating key pairs should be transparent to users. This transparency means users do not have to understand that a key update needs to take place, and they will never experience a “denial of service” because their keys are no longer valid. To ensure transparency and prevent denial of service, users’ key pairs must be automatically updated before they expire.
Maintaining histories of key pairs: When encryption key pairs are updated, the history of previous decryption keys must be maintained. This “key history” allows users to access any of their prior decryption keys to decrypt data. (When data is encrypted with a user’s encryption key, only the corresponding paired decryption key can be used to decrypt it.) To ensure transparency, the client-side software must automatically manage users’ histories of decryption keys.
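The need for a key history can be sketched as a lookup table from key ID to decryption key. A repeating-XOR toy cipher stands in here for a real asymmetric scheme, and the key IDs are illustrative; the point is only that old ciphertext needs the key that was current when it was created:

```python
# Toy cipher: XOR with a repeating key (symmetric, so encrypt == decrypt).
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key_history = {              # key_id -> decryption key, kept after each rollover
    "2023-key": b"old-secret",
    "2024-key": b"new-secret",
}

# A document encrypted last year under the old key:
old_ciphertext = xor_crypt(b"quarterly report", key_history["2023-key"])

# After key update, the client software looks up the right key by ID:
plaintext = xor_crypt(old_ciphertext, key_history["2023-key"])
print(plaintext)             # b'quarterly report'

# Decrypting with the current key fails to recover the document:
print(xor_crypt(old_ciphertext, key_history["2024-key"]) == b"quarterly report")  # False
```

Without the history entry for "2023-key", last year's data would be permanently unreadable, which is exactly the denial of service the PKI must prevent.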
     7.     Timestamping
Trusted timestamping is the process of securely keeping track of the creation and modification time of a document. Security here means that no one, not even the owner of the document, should be able to change it once it has been recorded, provided that the timestamp's integrity is never compromised.
The administrative aspect involves setting up a publicly available, trusted timestamp management infrastructure to collect, process, and renew timestamps.
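The core protocol can be sketched with the standard library: the client sends only the document's hash to the timestamp authority (TSA), which binds hash and time together under its key. HMAC stands in here for the TSA's real digital signature, and the key and function names are illustrative:

```python
import hashlib
import hmac
import time

TSA_KEY = b"tsa-secret-key"  # illustrative; a real TSA signs with a private key

def timestamp(document: bytes) -> dict:
    """TSA side: bind the document's hash to the current time."""
    digest = hashlib.sha256(document).hexdigest()
    t = int(time.time())
    token = hmac.new(TSA_KEY, f"{digest}|{t}".encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "time": t, "token": token}

def verify(document: bytes, stamp: dict) -> bool:
    """Verifier side: recompute the binding and compare."""
    digest = hashlib.sha256(document).hexdigest()
    expected = hmac.new(TSA_KEY, f"{digest}|{stamp['time']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["token"])

stamp = timestamp(b"contract v1")
print(verify(b"contract v1", stamp))           # True
print(verify(b"contract v1 (edited)", stamp))  # False -- any change is detected
```

Because only the hash is sent, the TSA never sees the document itself, yet any later edit to the document or to the recorded time invalidates the token.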
8.     Client-side software
A consistent, easy-to-use PKI implementation within client-side software lowers PKI operating costs. In addition, client-side software must support all of the elements of a PKI discussed above, so that users in a business receive a usable, transparent (and thus acceptable) PKI.
     9.     Support for Non-repudiation
Repudiation occurs when an individual denies involvement in a transaction. (For instance, when someone claims a credit card is stolen, this means that he or she is repudiating liability for transactions that occur with that card anytime after reporting the theft).
Non-repudiation means that an individual cannot successfully deny involvement in a transaction. In the paper world, individuals' signatures legally bind them to their transactions (for example, credit card charges or business contracts); the signature prevents repudiation of those transactions. In the electronic world, the replacement for the pen-based signature is the digital signature. Since electronic commerce makes traditional pen-based signatures impractical, all forms of electronic commerce require digital signatures.
