CISSP
A cold site is a permanent location that provides you with your own space that you can move into in case of a disaster or catastrophe. It is one of the cheapest solutions available as a rental space, but it is also the one that takes the most time to recover. A cold site usually takes one to two weeks for recovery.
Although major disruptions with long-term effects may be rare, they should be accounted for in the contingency plan. The plan should include a strategy to recover and perform system operations at an alternate facility for an extended period. In general, three types of alternate sites are available:
- Dedicated site owned or operated by the organization. Also called redundant or alternate sites;
- Reciprocal agreement or memorandum of agreement with an internal or external entity; and
- Commercially leased facility.
Regardless of the type of alternate site chosen, the facility must be able to support system operations as defined in the contingency plan. The three alternate site types commonly categorized in terms of their operational readiness are cold sites, warm sites, or hot sites. Other variations or combinations of these can be found, but generally all variations retain similar core features found in one of these three site types.
Progressing from basic to advanced, the sites are described below:
Cold Sites are typically facilities with adequate space and infrastructure (electric power, telecommunications connections, and environmental controls) to support information system recovery activities.
Warm Sites are partially equipped office spaces that contain some or all of the system hardware, software, telecommunications, and power sources.
Hot Sites are facilities appropriately sized to support system requirements and configured with the necessary system hardware, supporting infrastructure, and support personnel.
As discussed above, these three alternate site types are the most common. There are also variations, and hybrid mixtures of features from any one of the three. Each organization should evaluate its core requirements in order to establish the most effective solution.
Two examples of variations to the site types are:
Mobile Sites are self-contained, transportable shells custom-fitted with specific telecommunications and system equipment necessary to meet system requirements.
Mirrored Sites are fully redundant facilities with automated real-time information mirroring. Mirrored sites are identical to the primary site in all technical respects.
There are obvious cost and ready-time differences among the options. In these examples, the mirrored site is the most expensive choice, but it ensures virtually 100 percent availability. Cold sites are the least expensive to maintain, although they may require substantial time to acquire and install necessary equipment. Partially equipped sites, such as warm sites, fall in the middle of the spectrum. In many cases, mobile sites may be delivered to the desired location within 24 hours, but the time necessary for equipment installation and setup can increase this response time. The selection of fixed-site locations should account for the time and mode of transportation necessary to move personnel and/or equipment there. In addition, the fixed site should be in a geographic area that is unlikely to be negatively affected by the same hazard as the organization's primary site.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 63.
Recovery Site Strategies
Depending on how much downtime an organization has before the technology recovery must be complete, recovery strategies selected for the technology environment could be any one of the following:
– Dual Data Center: This strategy is employed for applications which cannot accept any downtime without negatively impacting the organization. The applications are split between two geographically dispersed data centers and either load balanced between the two centers or hot swapped between the two centers. The surviving data center must have enough head room to carry the full production load in either case.
– Internal Hot Site: This site is standby ready with all the technology and equipment necessary to run the applications positioned there. The planner will be able to effectively restart an application in a hot site recovery without having to perform any bare metal recovery of servers. If this is an internal solution, then often the organization will run non-time-sensitive processes there such as development or test environments, which will be pushed aside for recovery of production when needed. When employing this strategy, it is important that the two environments be kept as close to identical as possible to avoid problems with O/S levels, hardware differences, capacity differences, etc., from preventing or delaying recovery.
– External Hot Site: This strategy has equipment on the floor waiting, but the environment must be rebuilt for the recovery. These are services contracted through a recovery service provider. Again, it is important that the two environments be kept as close to identical as possible to avoid problems with O/S levels, hardware differences, capacity differences, etc., from preventing or delaying recovery. Hot site vendors tend to have the most commonly used hardware and software products to attract the largest number of customers to utilize the site. Unique equipment or software would generally need to be provided by the organization either at time of disaster or stored there ahead of time.
– Warm Site: A leased or rented facility that is usually partially configured with some equipment, but not the actual computers. It will generally have all the cooling, cabling, and networks in place to accommodate the recovery, but the actual servers, mainframe, and similar equipment are delivered to the site at time of disaster.
– Cold Site: A cold site is a shell or empty data center space with no technology on the floor. All technology must be purchased or acquired at the time of disaster.
The Differential Backup Method only copies files that have changed since a full backup was last performed.
One of the key items to understand regarding backups is the archive bit. The archive bit is used to determine which files have already been backed up. The archive bit is set when a file is modified or a new file is created, which indicates to the backup program that it has to be saved on the next backup. When a full backup is performed, the archive bit is cleared, indicating that the files were backed up. This allows backup programs to do an incremental or differential backup that only backs up the changes to the filesystem since the last time the bit was cleared.
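To make the archive-bit behaviour concrete, here is a minimal Python sketch (the file names and the archive_bit flag are hypothetical, not taken from any referenced source) showing which files each backup type selects and whether it clears the bit:

```python
# Minimal sketch of archive-bit semantics for full, incremental and
# differential backups. File names and flags are hypothetical.

files = {
    "report.doc": {"archive_bit": True},   # new or modified since last backup
    "budget.xls": {"archive_bit": False},  # unchanged since the bit was last cleared
    "notes.txt":  {"archive_bit": True},
}

def full_backup(files):
    """Copy every file and clear the archive bit on all of them."""
    selected = list(files)
    for meta in files.values():
        meta["archive_bit"] = False
    return selected

def incremental_backup(files):
    """Copy only files whose archive bit is set, then clear the bit."""
    selected = [name for name, meta in files.items() if meta["archive_bit"]]
    for name in selected:
        files[name]["archive_bit"] = False
    return selected

def differential_backup(files):
    """Copy files whose archive bit is set, but leave the bit alone,
    so every differential keeps capturing changes since the last full."""
    return [name for name, meta in files.items() if meta["archive_bit"]]

print("Differential:", differential_backup(files))  # bit untouched
print("Incremental:", incremental_backup(files))    # bit cleared
print("Full:", full_backup(files))                  # everything, bit cleared
```

This is also why a restore from incrementals needs the last full backup plus every incremental taken since, while a restore from differentials needs only the last full backup and the most recent differential.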
Full Backup (or Reference Backup)
A full backup will back up all the files and folders on the drive every time you run the full backup. The archive bit is cleared on all files, indicating they were all backed up.
Advantages:
- All files from the selected drives and folders are backed up to one backup set.
- In the event you need to restore files, they are easily restored from the single backup set.
Disadvantages:
- A full backup is more time consuming than other backup options.
- Full backups require more disk, tape, or network drive space.
Incremental Backup
An incremental backup provides a backup of files that have changed or are new since the last incremental backup.
For the first incremental backup, all files in the file set are backed up (just as in a full backup). If you use the same file set to perform an incremental backup later, only the files that have changed are backed up. If you use the same file set for a third backup, only the files that have changed since the second backup are backed up, and so on.
Incremental backups clear the archive bit.
Advantages:
- Backup time is faster than full backups.
- Incremental backups require less disk, tape, or network drive space.
- You can keep several versions of the same files on different backup sets.
Disadvantages:
- In order to restore all the files, you must have all of the incremental backups available.
- It may take longer to restore a specific file since you must search more than one backup set to find the latest version of the file.
Differential Backup
A differential backup provides a backup of files that have changed since a full backup was performed. A differential backup typically saves only the files that are different or new since the last full backup. Together, a full backup and a differential backup include all the files on your computer, changed and unchanged.
Differential backups do not clear the archive bit.
Advantages:
- Differential backups require less disk, tape, or network drive space than full backups, although each differential grows as time passes since the last full backup.
- Backup time is faster than a full backup, and a restore is simpler than with incremental backups because only the last full backup and the most recent differential are needed.
Disadvantages:
- Restoring all your files may take considerably longer since you may have to restore both the last differential and the full backup.
- Restoring an individual file may take longer since you have to locate the file on either the differential or the full backup.
For more info see: http://support.microsoft.com/kb/136621
Data diddling
It involves changing data before, or as, it is entered into the computer; in other words, it refers to the alteration of existing data.
The other answers are incorrect because :
Salami techniques: A salami attack is one in which an attacker commits several small crimes with the hope that the overall larger crime will go unnoticed.
Trojan horses: A Trojan horse is a program that is disguised as another program.
Viruses: A virus is a small application, or a string of code, that infects applications.
Behavior-based IDS
Knowledge-based IDS are more common than behavior-based ID systems.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 63.
Application-Based IDS - "a subset of HIDS that analyze what's going on in an application using the transaction log files of the application." Source: Official ISC2 CISSP CBK Review Seminar Student Manual Version 7.0 p. 87
Host-Based IDS - "an implementation of IDS capabilities at the host level. Its most significant difference from NIDS is intrusion detection analysis, and related processes are limited to the boundaries of the host." Source: Official ISC2 Guide to the CISSP CBK - p. 197
Network-Based IDS - "a network device, or dedicated system attached to the network, that monitors traffic traversing the network segment for which it is integrated." Source: Official ISC2 Guide to the CISSP CBK - p. 196
CISSP For Dummies, a book that we recommend for a quick overview of the 10 domains, has nice and concise coverage of the subject:
Intrusion detection is defined as real-time monitoring and analysis of network activity and data for potential vulnerabilities and attacks in progress. One major limitation of current intrusion detection system (IDS) technologies is the requirement to filter false alarms lest the operator (system or security administrator) be overwhelmed with data. IDSes are classified in many different ways, including active and passive, network-based and host-based, and knowledge-based and behavior-based:
Active and passive IDS
An active IDS (now more commonly known as an intrusion prevention system — IPS) is a system that's configured to automatically block suspected attacks in progress without any intervention required by an operator. IPS has the advantage of providing real-time corrective action in response to an attack but has many disadvantages as well. An IPS must be placed in-line along a network boundary; thus, the IPS itself is susceptible to attack. Also, if false alarms and legitimate traffic haven't been properly identified and filtered, authorized users and applications may be improperly denied access. Finally, the IPS itself may be used to effect a Denial of Service (DoS) attack by intentionally flooding the system with alarms that cause it to block connections until no connections or bandwidth are available.
A passive IDS is a system that's configured only to monitor and analyze network traffic activity and alert an operator to potential vulnerabilities and attacks. It isn't capable of performing any protective or corrective functions on its own. The major advantages of passive IDSes are that these systems can be easily and rapidly deployed and are not normally susceptible to attack themselves.
Network-based and host-based IDS
A network-based IDS usually consists of a network appliance (or sensor) with a Network Interface Card (NIC) operating in promiscuous mode and a separate management interface. The IDS is placed along a network segment or boundary and monitors all traffic on that segment.
A host-based IDS requires small programs (or agents) to be installed on individual systems to be monitored. The agents monitor the operating system and write data to log files and/or trigger alarms. A host-based IDS can only monitor the individual host systems on which the agents are installed; it doesn't monitor the entire network.
Knowledge-based and behavior-based IDS
A knowledge-based (or signature-based) IDS references a database of previous attack profiles and known system vulnerabilities to identify active intrusion attempts. Knowledge-based IDS is currently more common than behavior-based IDS.
Advantages of knowledge-based systems include the following:
- It has lower false alarm rates than behavior-based IDS.
- Alarms are more standardized and more easily understood than behavior-based IDS.
Disadvantages of knowledge-based systems include these:
- Signature database must be continually updated and maintained.
- New, unique, or original attacks may not be detected or may be improperly classified.
A behavior-based (or statistical anomaly–based) IDS references a baseline or learned pattern of normal system activity to identify active intrusion attempts. Deviations from this baseline or pattern cause an alarm to be triggered.
Advantages of behavior-based systems include that they
- Dynamically adapt to new, unique, or original attacks.
- Are less dependent on identifying specific operating system vulnerabilities.
Disadvantages of behavior-based systems include
- Higher false alarm rates than knowledge-based IDSes.
- Usage patterns that may change often and may not be static enough to implement an effective behavior-based IDS.
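As a rough sketch of the two detection philosophies described above (the signatures, baseline, and traffic values below are invented for illustration, not taken from any IDS product):

```python
# Simplified contrast between knowledge-based (signature) and
# behavior-based (anomaly) detection. Signatures and values are invented.

SIGNATURES = ["/etc/passwd", "' OR 1=1", "cmd.exe"]   # known attack patterns

def knowledge_based(payload: str) -> bool:
    """Alert only when the payload matches a known signature."""
    return any(sig in payload for sig in SIGNATURES)

def behavior_based(requests_per_minute: int, baseline: int = 120,
                   tolerance: float = 3.0) -> bool:
    """Alert when activity deviates too far from the learned baseline."""
    return requests_per_minute > baseline * tolerance

print(knowledge_based("GET /index.html"))        # False: no signature match
print(knowledge_based("GET /../../etc/passwd"))  # True: signature match
print(behavior_based(90))                        # False: within normal range
print(behavior_based(900))                       # True: anomalous spike
```

The signature check misses anything not in its database, while the anomaly check flags any large deviation from the baseline, including legitimate but unusual activity, which mirrors the advantages and disadvantages listed above.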
RAID Level 1
RAID Level 1 is often implemented with a one-for-one disk-to-disk ratio.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 65.
The following reference(s) were used to create this question:
Official (ISC)2 Guide to the CISSP CBK, Fourth Edition ((ISC)2 Press) kindle location 22532
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 7: Telecommunications and Network Security (page 480).
You can find additional information on RAID 0 here: http://www.acnc.com/04_01_00.html
You can find additional information on RAID 1 here: http://www.acnc.com/04_01_01.html
You can find additional information on RAID 2 here: http://www.acnc.com/04_01_02.html
You can find additional information on RAID 5 here: http://www.acnc.com/04_01_05.html
More information on RAID can be found at Wikipedia - http://en.wikipedia.org/wiki/RAID
See also: "This level duplicates all disk writes from one disk to another to create two identical drives. This technique is also known as data mirroring. Redundancy is provided at this level" Source: Official ISC2 Guide to the CISSP CBK. p. 657
=============================
RAID Level 0 - "Writes files in stripes across multiple disks without the use of parity information. This technique allows for fast reading and writing to disk. However, without parity information, it is not possible to recover from a hard drive failure." Source: Official ISC2 Guide to the CISSP CBK. p. 657
=============================
RAID Level 2 - "Data is spread across multiple disks at the bit level using this technique. Redundancy information is computed using a Hamming error correction code, which is the same technique used within hard drives and error-correcting memory modules." Source: Official ISC2 Guide to the CISSP CBK p. 657-658
=============================
RAID Level 5 - "This level requires three or more drives to implement. Data and parity information is striped together across all drives. This level is the most popular and can tolerate the loss of any one drive." Source: Official ISC2 Guide to the CISSP CBK p. 658
To improve system performance
This question is asking what the primary focus of RAID 0 is.
RAID overview: A common way that fault tolerance and system resilience are added to computers is with a redundant array of disks (RAID). A RAID array includes two or more disks, and most RAID configurations will continue to operate even after one of the disks fails. Some of the common RAID configurations are as follows:
RAID-0 This is also called striping. It uses two or more disks and improves the disk subsystem performance, but it does not provide fault tolerance. A RAID-0 configuration is really focused on performance, since the blocks are basically striped across multiple disks. Reading from a RAID-0 group is also very fast: a read request comes in and the RAID controller, which controls the placement of data, knows that it can read A0 and A1 at the same time since they are on separate disks, basically doubling the potential read performance relative to a single disk (see the sketch after this overview).
RAID-1 This is also called mirroring. It uses two disks, which both hold the same data. If one disk fails, the other disk includes the data so a system can continue to operate after a single disk fails. Depending on the hardware used and which drive fails, the system may be able to continue to operate without intervention, or the system may need to be manually configured to use the drive that didn't fail.
RAID-5 This is also called striping with parity. It uses three or more disks with the equivalent of one disk holding parity information. If any single disk fails, the RAID array will continue to operate, though it will be slower.
RAID-10 This is also known as RAID 1 + 0 or a stripe of mirrors, and is configured as two or more mirrors (RAID-1) configured in a striped (RAID-0) configuration. It uses at least four disks but can support more as long as an even number of disks are added. It will continue to operate even if multiple disks fail, as long as at least one drive in each mirror continues to function. For example, if it had three mirrored sets (called M1, M2, and M3 for this example) it would have a total of six disks. If one drive in M1, one in M2, and one in M3 all failed, the array would continue to operate. However, if two drives in any of the mirrors failed, such as both drives in M1, the entire array would fail.
Note: The fault-tolerant versions of RAID (1, 5, 6, and so on) are most concerned with data availability and are a way to make data fault-tolerant. RAID 5 will have minimal downtime if a disk failure occurs; RAID 1 (mirroring) should have zero downtime if a disk failure occurs. Confidentiality can be achieved with encryption. Integrity can be brought about by way of hashing.
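To visualize the difference between striping and mirroring described above, here is a minimal sketch (the disk counts and block labels are arbitrary assumptions, not from the cited material):

```python
# Toy illustration of RAID-0 striping vs RAID-1 mirroring.
# Disk counts and block labels are arbitrary.

def raid0_layout(blocks, disks=2):
    """Stripe blocks round-robin across disks; reads can be parallelized,
    but losing any one disk loses the whole stripe set."""
    layout = {d: [] for d in range(disks)}
    for i, block in enumerate(blocks):
        layout[i % disks].append(block)
    return layout

def raid1_layout(blocks, disks=2):
    """Mirror every block to every disk; any surviving disk has all data."""
    return {d: list(blocks) for d in range(disks)}

data = ["A0", "A1", "A2", "A3"]
print(raid0_layout(data))  # {0: ['A0', 'A2'], 1: ['A1', 'A3']}
print(raid1_layout(data))  # both disks hold A0..A3
```

Losing one disk in the RAID-0 layout destroys the stripe set, while either disk in the RAID-1 layout still holds every block, which is why RAID-0 is about performance and RAID-1 about fault tolerance.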
Host-Based Firewall
A Host-Based Firewall resides ON the user's computer and tries to defend it from attack from external sources.
It also can restrict traffic leaving the computer which defends the rest of the local network or even the internet from malicious traffic leaving from the local computer.
There are variations of the Host-Based Firewall, notably HBSS - Host-Based Security Systems which include both the Firewall AND a HIPS - Host Intrusion Prevention System to further protect the computer.
- Network Firewall: This isn't the correct answer because a network firewall defends an entire network against attack, not just a single user's computer. Network Firewalls inspect all traffic crossing from an untrusted network into a trusted network and can permit/deny traffic based upon its configuration. It can also normalize network packets to avoid dangerous conditions.
- HIDS - Host-Based IDS: A Host-Based IDS is not a firewall. The HIDS usually just warns on traffic but doesn't normally take action to block traffic. Certain HBSS - Host-Based Security Systems combine a firewall with a Host IPS - Intrusion Prevention System.
- NIDS - Network IDS: A Network IDS or NIDS generally only alerts on problems but doesn't take action. Some argue that IDS devices can't effectively take action by suppressing a TCP conversation with RST packets. However, since Snort 2.X, that IDS could be configured to take proactive actions and automatically counter attacks by running pre-configured scripts if a Snort signature was tripped. This blurs the line between IDS detection and IPS prevention. Either way, in this question the control in question is generally run from a user's computer.
Hackers are classified as a human threat and not a classification by itself.
All the other answers are incorrect. Threats result from a variety of factors, although they are classified in three types: Natural (e.g., hurricane, tornado, flood and fire), human (e.g. operator error, sabotage, malicious code) or technological (e.g. equipment failure, software error, telecommunications network outage, electric power failure).
Capacitance detectors
Capacitance detectors monitor an electrical field surrounding the object being monitored. They are used for spot protection within a few inches of the object, rather than for overall room security monitoring used by wave detectors. Penetration of this field changes the electrical capacitance of the field enough to generate an alarm. Wave pattern motion detectors generate a frequency wave pattern and send an alarm if the pattern is disturbed as it is reflected back to its receiver. Field-powered devices are a type of personnel access control device. Audio detectors simply monitor a room for any abnormal sound wave generation and trigger an alarm.
Disaster Recovery should never be considered a discretionary expense. It is far too important a task. In order to maintain the continuity of the business, Disaster Recovery should be a commitment of and by the organization.
A discretionary fixed cost has a short future planning horizon, under a year. These types of costs arise from annual decisions of management to spend in specific fixed cost areas, such as marketing and research. DR would be an ongoing long-term commitment, not a short-term effort only.
A committed fixed cost has a long future planning horizon, more than one year. These types of costs relate to a company's investment in assets such as facilities and equipment. Once such costs have been incurred, the company is required to make future payments.
The following answers are incorrect:
committed expense. Is incorrect because Disaster Recovery should be a committed expense.
enforcement of legal statutes. Is incorrect because Disaster Recovery can include enforcement of legal statutes. Many organizations have legal requirements toward Disaster Recovery.
compliance with regulations. Is incorrect because Disaster Recovery often means compliance with regulations. Many financial institutions have regulations requiring Disaster Recovery Plans and Procedures.
The system is optimized prior to the addition of security. Is incorrect because if you wait to implement security after a system is completed, the cost of adding security increases dramatically and can become much more complex.
The system is procured off-the-shelf. Is incorrect because it is often difficult to add security to off-the shelf systems.
The system is customized to meet the specific security threat. Is incorrect because this is a distractor. This implies only a single threat.
Daily backup method
A daily backup is not a backup method, but defines the periodicity at which backups are made. There can be daily full, incremental, or differential backups.
Capacitance detectors
Capacitance detectors monitor an electrical field surrounding the object being monitored. They are used for spot protection within a few inches of the object, rather than for overall room security monitoring used by wave detectors. Penetration of this field changes the electrical capacitance of the field enough to generate an alarm. Wave pattern motion detectors generate a frequency wave pattern and send an alarm if the pattern is disturbed as it is reflected back to its receiver. Field-powered devices are a type of personnel access control device. Audio detectors simply monitor a room for any abnormal sound wave generation and trigger an alarm.
The Exclusionary Rule
The exclusionary rule states that evidence must be gathered legally or it can't be used.
The principle based on federal Constitutional Law that evidence illegally seized by law enforcement officers in violation of a suspect's right to be free from unreasonable searches and seizures cannot be used against the suspect in a criminal prosecution.
The exclusionary rule is designed to exclude evidence obtained in violation of a criminal defendant's Fourth Amendment rights. The Fourth Amendment protects against unreasonable searches and seizures by law enforcement personnel. If the search of a criminal suspect is unreasonable, the evidence obtained in the search will be excluded from trial.
The exclusionary rule is a court-made rule. This means that it was created not in statutes passed by legislative bodies but rather by the U.S. Supreme Court. The exclusionary rule applies in federal courts by virtue of the Fourth Amendment. The Court has ruled that it applies in state courts through the due process clause of the Fourteenth Amendment. (The Bill of Rights—the first ten amendments—applies to actions by the federal government. The Fourteenth Amendment, the Court has held, makes most of the protections in the Bill of Rights applicable to actions by the states.)
The exclusionary rule has been in existence since the early 1900s. Before the rule was fashioned, any evidence was admissible in a criminal trial if the judge found the evidence to be relevant. The manner in which the evidence had been seized was not an issue. This began to change in 1914, when the U.S. Supreme Court devised a way to enforce the Fourth Amendment. In Weeks v. United States, 232 U.S. 383, 34 S. Ct. 341, 58 L. Ed. 652 (1914), a federal agent had conducted a warrantless search for evidence of gambling at the home of Fremont Weeks. The evidence seized in the search was used at trial, and Weeks was convicted. On appeal, the Court held that the Fourth Amendment barred the use of evidence secured through a warrantless search. Weeks's conviction was reversed, and thus was born the exclusionary rule.
The best evidence rule concerns limiting the potential for alteration. The best evidence rule is a common law rule of evidence which can be traced back at least as far as the 18th century. In Omychund v Barker (1745) 1 Atk, 21, 49; 26 ER 15, 33, Lord Hardwicke stated that no evidence was admissible unless it was "the best that the nature of the case will allow". The general rule is that secondary evidence, such as a copy or facsimile, will not be admissible if an original document exists and is not unavailable due to destruction or other circumstances indicating unavailability.
The rationale for the best evidence rule can be understood from the context in which it arose: in the eighteenth century a copy was usually made by hand by a clerk (or even a litigant). The best evidence rule was predicated on the assumption that, if the original was not produced, there was a significant chance of error or fraud in relying on such a copy.
The hearsay rule concerns computer-generated evidence, which is considered second-hand evidence.
Hearsay is information gathered by one person from another concerning some event, condition, or thing of which the first person had no direct experience. When submitted as evidence, such statements are called hearsay evidence. As a legal term, "hearsay" can also have the narrower meaning of the use of such information as evidence to prove the truth of what is asserted. Such use of "hearsay evidence" in court is generally not allowed. This prohibition is called the hearsay rule.
For example, a witness says "Susan told me Tom was in town". Since the witness did not see Tom in town, the statement would be hearsay evidence to the fact that Tom was in town, and not admissible. However, it would be admissible as evidence that Susan said Tom was in town, and on the issue of her knowledge of whether he was in town.
Hearsay evidence has many exception rules. For the purpose of the exam you must be familiar with the business records exception rule to hearsay evidence. Business records created during the ordinary course of business are considered reliable and can usually be brought in under this exception if the proper foundation is laid when the records are introduced into evidence. Depending on which jurisdiction the case is in, either the records custodian or someone with knowledge of the records must lay a foundation for the records. Logs that are collected as part of a documented business process carried out at regular intervals would fall under this exception. They could be presented in court and not be considered hearsay.
The investigation rule is a distractor.
Digital watermarking
RFC 2828 (Internet Security Glossary) defines digital watermarking as computing techniques for inseparably embedding unobtrusive marks or labels as bits in digital data-text, graphics, images, video, or audio and for detecting or extracting the marks later.
The set of embedded bits (the digital watermark) is sometimes hidden, usually imperceptible, and always intended to be unobtrusive. It is used as a measure to protect intellectual property rights.
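To make the idea of embedding a mark "as bits in digital data" concrete, here is a minimal least-significant-bit sketch over a byte buffer (an illustrative toy only, not a real watermarking scheme; production watermarks are designed to survive compression, cropping, and other editing):

```python
# Minimal least-significant-bit (LSB) watermark embed/extract over bytes.
# Illustrative only; real watermarking must be far more robust.

def embed(data: bytes, mark_bits: str) -> bytes:
    """Overwrite the lowest bit of each byte with one bit of the mark."""
    out = bytearray(data)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & 0xFE) | int(bit)
    return bytes(out)

def extract(data: bytes, length: int) -> str:
    """Read back the lowest bit of the first `length` bytes."""
    return "".join(str(b & 1) for b in data[:length])

cover = bytes(range(16))          # stand-in for image or audio samples
marked = embed(cover, "10110001")
print(extract(marked, 8))         # -> '10110001'
```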
Steganography involves hiding the very existence of a message.
A digital signature is a value computed with a cryptographic algorithm and appended to a data object in such a way that any recipient of the data can use the signature to verify the data's origin and integrity.
A digital envelope is a combination of encrypted data and its encryption key in an encrypted form that has been prepared for use of the recipient.
Exclusionary rule
The exclusionary rule is designed to exclude evidence obtained in violation of a criminal defendant's Fourth Amendment rights. The Fourth Amendment protects against unreasonable searches and seizures by law enforcement personnel. If the search of a criminal suspect is unreasonable, the evidence obtained in the search will be excluded from trial.
The exclusionary rule is a court-made rule. This means that it was created not in statutes passed by legislative bodies but rather by the U.S. Supreme Court. The exclusionary rule applies in federal courts by virtue of the Fourth Amendment. The Court has ruled that it applies in state courts through the Due Process Clause of the Fourteenth Amendment. (The Bill of Rights—the first ten amendments—applies to actions by the federal government. The Fourteenth Amendment, the Court has held, makes most of the protections in the Bill of Rights applicable to actions by the states.)
Best evidence rule
The best evidence rule concerns limiting the potential for alteration. The best evidence rule is a common law rule of evidence which can be traced back at least as far as the 18th century. In Omychund v Barker (1745) 1 Atk, 21, 49; 26 ER 15, 33, Lord Hardwicke stated that no evidence was admissible unless it was "the best that the nature of the case will allow". The general rule is that secondary evidence, such as a copy or facsimile, will not be admissible if an original document exists and is not unavailable due to destruction or other circumstances indicating unavailability.
hearsay evidence
Hearsay is information gathered by one person from another concerning some event, condition, or thing of which the first person had no direct experience. When submitted as evidence, such statements are called hearsay evidence. As a legal term, "hearsay" can also have the narrower meaning of the use of such information as evidence to prove the truth of what is asserted. Such use of "hearsay evidence" in court is generally not allowed. This prohibition is called the hearsay rule.
NOT Preventive operational control
Conducting security awareness and technical training to ensure that end users and system users are aware of the rules of behaviour and their responsibilities in protecting the organization's mission is an example of a preventive management control, and therefore not an operational control.
If you intend on prosecuting an intruder, evidence has to be collected in a lawful manner and, most importantly, protected through a secure chain-of-custody procedure that tracks who has been involved in handling the evidence and where it has been stored. All other choices are all important points, but not the best answer, since no prosecution is possible without a proper, provable chain of custody of evidence.
Two concepts that are at the heart of dealing effectively with digital/electronic evidence, or any evidence for that matter, are the chain of custody and authenticity/integrity.
The chain of custody refers to the who, what, when, where, and how the evidence was handled, from its identification through its entire life cycle, which ends with destruction or permanent archiving.
Any break in this chain can cast doubt on the integrity of the evidence and on the professionalism of those directly involved in either the investigation or the collection and handling of the evidence. The chain of custody requires following a formal process that is well documented and forms part of a standard operating procedure that is used in all cases, no exceptions.
Is the operating system configured to prevent circumvention of the security software and application controls?
Physical security and environmental security are part of operational controls, and are measures taken to protect systems, buildings, and related supporting infrastructures against threats associated with their physical environment. All the questions above are useful in assessing physical access controls except for the one regarding operating system configuration, which is a logical access control.
Vibration sensors are similar and are also implemented to detect forced entry. Financial institutions may choose to implement these types of sensors on exterior walls, where bank robbers may attempt to drive a vehicle through. They are also commonly used around the ceiling and flooring of vaults to detect someone trying to make an unauthorized bank withdrawal.
Such sensors are prone to false positives. If there is a large truck with heavy equipment driving by, it may trigger the sensor. The same goes for a storm with thunder and lightning: it may trigger the alarm even though there is no adversarial threat or disturbance.
First you have to realize that the question is specifically talking about a CDROM. The information stored on a CDROM is not in electromagnetic format, so a degausser would be ineffective.
You cannot sanitize a CDROM, but you might be able to sanitize a rewritable CD (CD-RW). A CDROM is a write-once device and cannot be overwritten like a hard disk or other magnetic device.
Physical Damage would not be enough as information could still be extracted in a lab from the undamaged portion of the media or even from the pieces after the physical damage has been done.
Physical Destruction using a shredder, your microwave oven, melting it, would be very effective and the best choice for a non magnetic media such as a CDROM.
Fault tolerance countermeasures are designed to combat threats to design reliability. Tolerance and reliability are almost synonymous, which was a good indication of the best choice. Reliability tools are tools such as failover mechanisms, load balancers, clustering tools, etc.
Maximum tolerable downtime (MTD). Here are some examples of MTD values suggested by Shon Harris:
NonEssential: 30 days
Normal: 7 days
Important: 72 hours
Urgent: 24 hours
Critical: minutes to hours
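A small sketch of how such MTD values could drive recovery ordering during planning (the system names and their assigned categories below are hypothetical):

```python
# Hypothetical systems prioritized by maximum tolerable downtime (MTD).
# Categories follow the example values suggested above.

MTD_HOURS = {
    "Critical": 1, "Urgent": 24, "Important": 72,
    "Normal": 7 * 24, "NonEssential": 30 * 24,
}

systems = {
    "payment-gateway": "Critical",
    "email": "Urgent",
    "hr-portal": "Normal",
    "intranet-wiki": "NonEssential",
}

# Recover the systems with the smallest MTD first.
for name in sorted(systems, key=lambda s: MTD_HOURS[systems[s]]):
    print(f"{name}: recover within {MTD_HOURS[systems[name]]} hours")
```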
"Trusted paths provide trustworthy interfaces into privledged user functions and are intended to provide a way to ensure that any communications over that path cannot be intercepted or corrupted."
Fail soft
A system that experiences a security issue would disable only the portion of the system being affected by the issue. The rest of the system would continue to function as expected. The component or service that failed would be isolated or protected from being abused.
Fail Safe
A fail-safe lock in the PHYSICAL security context will default to being unlocked in case of a power interruption.
A fail-safe mechanism in the LOGICAL security context will default to being locked in case of problems or issues. For example, if you have a firewall and it cannot apply the policy properly, it will default to NO access and all will be locked, not allowing any packet to flow through without being inspected.
Fail open
Fail Open means that the mechanism will default to being unlocked in case of a failure or problem. This is very insecure. If you have a door access control mechanism that fails open, it means that the door would be unlocked and anyone could get through. A logical security mechanism would grant access and there would be no access control in place.
Fail closed
A Fail closed mechanism will default to being locked in case of a failure or problem. That would be a lot more secure than Fail Open for a logical access control mechanism.
Fail secure
A fail-secure mechanism in the logical or physical security context will default to being locked in case of a power interruption or a service that is not functioning properly. Nobody could exit the building and nobody would be able to come in either. In the case of the logical context, there is no access granted and everything is locked.
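The practical difference between failing open and failing closed for a logical control can be sketched as follows (load_policy and the port-based policy are hypothetical stand-ins, not a real firewall API):

```python
# Sketch of fail-open vs fail-closed behavior when a policy check breaks.
# load_policy() is a hypothetical callable that may raise on failure.

def is_allowed(packet, load_policy, fail_open=False):
    """Return True if the packet should be allowed through.
    fail_open=True:  on error, allow everything (insecure default).
    fail_open=False: on error, deny everything (fail-closed / fail-secure)."""
    try:
        policy = load_policy()                      # may raise if the policy store is down
        return policy.get(packet["dst_port"], False)
    except Exception:
        return fail_open

def good_policy():
    return {443: True, 22: False}

def broken_policy():
    raise RuntimeError("policy store unavailable")

pkt = {"dst_port": 443}
print(is_allowed(pkt, good_policy))                     # True: policy explicitly allows 443
print(is_allowed(pkt, broken_policy, fail_open=True))   # True: fails open, traffic passes unchecked
print(is_allowed(pkt, broken_policy, fail_open=False))  # False: fails closed, nothing passes
```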
Traffic traversing your network can be inspected for problems or threats unless that data is encrypted using a protocol like SSL, and it is not practical for most networks to decrypt such traffic for inspection (although it is possible). When an IDS/IPS inspects a packet, it looks into the data field to compare the traffic against a signature database and can trigger on matches, but if the network traffic is encrypted no match is ever detected and threats can go unmitigated. Unrestricted SSL traffic subjects your network to risk because the IDS/IPS or firewalls can't look into packets to see what's going on. One way to address the threat is to only allow SSL traffic to known sites which require the security of SSL, like bank sites or other sensitive sites. As usual, this would increase security but reduce usability and require more configuration.
Software updates, patches, or service packs are all vital to a secure network. However secure (or insecure) software is when it is released, bugs or vulnerabilities are often found afterwards. At that point software vendors must release patches to close the vulnerabilities.
While it is vital to apply these updates it is more important to test them prior to deployment because they could cause damage to your systems.
Essentially, you keep a test lab with computers similar to the ones in your organization and its software. Install the patches on the test computers to see whether they remain functional afterwards.
This way, you can identify problematic updates or request the vendor to change the patch to support your particular systems.
RAID, or Redundant Array of Independent Disks, is a physical disk drive array that provides fault tolerance by spreading data across separate physical disks to both enhance speed and provide protection against individual disk failure.
There are a handful of variations to this concept and in this question we discuss RAID 3.
In RAID 3, data is striped across the three physical disk drives AND one additional disk holds the parity information.
You might think of RAID 3 in math terms like 1+2+3=6, each number being a separate physical drive.
If you can imagine losing any one of these drives, you could still deduce the missing number from the others. Example: 1 + X + 3 = 6. The missing value is obviously 2 because 1 + 2 + 3 = 6.
Essentially, this is how the RAID Controller hardware and therefore the computer can continue to operate after a drive fails.
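The arithmetic analogy above corresponds to the XOR parity that a real controller computes; here is a minimal sketch (the byte values are arbitrary):

```python
# XOR parity as used by RAID 3/5: the parity byte is the XOR of the data
# bytes, so any single missing byte can be rebuilt from the survivors.
from functools import reduce

data_disks = [0x41, 0x42, 0x43]                  # bytes on three data disks
parity = reduce(lambda a, b: a ^ b, data_disks)  # stored on the parity disk

# Simulate losing the disk holding 0x42: XOR the survivors with the parity.
survivors = [data_disks[0], data_disks[2]]
rebuilt = reduce(lambda a, b: a ^ b, survivors + [parity])
print(hex(rebuilt))  # 0x42 -> the lost byte is recovered
```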
There are a handful of variations to this concept and in this question we discuss RAID 3.
As we see in the image, data is striped across the three physical disk drives AND one additional disk for the parity information.
You might think of RAID 3 in math terms like 1+2+3=6, each number being a separate physical drive.
If you can imagine losing any one of these drives, you could still deduce the missing number from the remaining numbers and the total.
Example: 1+X+3=6. The missing value is obviously 2, because 1+2+3=6.
Essentially, this is how the RAID Controller hardware and therefore the computer can continue to operate after a drive fails.
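Real RAID controllers use XOR parity rather than arithmetic sums, but the idea is the same. The minimal sketch below (illustrative only, with made-up byte values) rebuilds the contents of a failed data drive from the surviving drives and the dedicated parity drive.

    # One "stripe" of byte values held on three data drives (made-up data).
    drive1, drive2, drive3 = 0x41, 0x42, 0x43
    parity = drive1 ^ drive2 ^ drive3        # written to the dedicated parity drive

    # Drive 2 fails; XOR the survivors with the parity to reconstruct its byte.
    rebuilt_drive2 = drive1 ^ drive3 ^ parity
    assert rebuilt_drive2 == drive2          # 0x42 recovered, operations continue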
- RAID 1: Sorry, this isn't correct. RAID 1 is simply a primary drive with a mirror drive.
- RAID 2: This RAID configuration stripes data at the bit level; it never saw widespread use and is now considered obsolete.
- RAID 4: RAID 4 uses block-level striping, not byte-level striping, and it has largely been replaced by RAID 6. It might have looked like the right answer because the question mentions three drives plus one, totaling four drives, but the RAID level number does not necessarily correspond to the number of drives, as was the case here.
When people use a computer, everything they do under their account is executed on the system with the privileges granted to that account.
Malware on a computer can do anything the user can do, often without the user's knowledge, and the extent of the damage that malware can do depends upon the level of privileges the user has when the damage is done. For example, there are certain parts of the computer that an unprivileged account simply cannot affect, such as the Windows registry, a Unix /etc/shadow file, or a hosts file.
However, with a root or administrator account, malware can do practically anything to that computer because it has few constraints. Since malware can do anything a user can, if the user is working under an admin account, or worse, an enterprise administrator account in a Microsoft Active Directory domain, it becomes clear that it is absolutely critical that admin accounts NOT be used for everyday worker duties.
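A quick way to see that constraint in practice (illustrative only; the exact error depends on the platform and file) is to try to write to one of those protected files as an unprivileged user:

    # Run as a regular (non-root) user on a Unix-like system.
    try:
        with open("/etc/shadow", "a") as f:      # password hashes - writable by root only
            f.write("malicious entry\n")
    except PermissionError as err:
        print(f"blocked by the operating system: {err}")
    # Run the same code as root and the write succeeds - which is exactly
    # why malware inherits so much power from an administrative account.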
For your exam, you should know the following key elements of computer forensics during audit planning.
- Data Protection - To prevent sought-after information from being altered, all necessary measures must be in place. It is important to establish a specific protocol to inform appropriate parties that electronic evidence will be sought, and to ensure it is not destroyed by any means.
- Data Acquisition - All required information and data should be transferred to a controlled location; this includes all types of electronic media, such as fixed disk drives and removable media. Each device must be checked to ensure that it is write protected. This may be achieved by using a device known as a write blocker.
- Imaging - Imaging is a process that allows one to obtain a bit-for-bit copy of the data, to avoid damage to the original data or information when multiple analyses may be performed. The imaging process is used to obtain residual data, such as deleted files, fragments of deleted files, and other information present on the disk, for analysis. This is possible because imaging duplicates the disk surface, sector by sector. (A minimal hashing sketch for verifying an image follows the report-goals list below.)
- Extraction - This process consists of the identification and selection of data from the imaged data set. This process should include standards of quality, integrity, and reliability. The extraction process includes the software used and the media on which the image was made, and it can draw on different sources such as system logs, firewall logs, audit trails, and network management information.
- Interrogation - Interrogation is used to obtain prior indicators or relationships, including telephone numbers, IP addresses, and names of individuals, from the extracted data.
- Investigation/Normalization - This process converts the extracted information into a format that can be understood by the investigator. It includes conversion of hexadecimal or binary data into readable characters or into a format suitable for data analysis tools.
- Reporting - The information obtained from computer forensics has limited value if it is not collected and reported in the proper way. When IS auditors write the report, they must include why the system was reviewed, how the computer data were reviewed, and what conclusions were drawn from the analysis. The report should achieve the following goals:
          - Be able to withstand a barrage of legal scrutiny
          - Be unambiguous and not open to misinterpretation
          - Be easily referenced
          - Contain all information required to explain the conclusions reached
          - Offer valid conclusions, opinions, or recommendations when needed
          - Be created in a timely manner
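As mentioned in the Imaging step above, a common way to demonstrate that an acquired image is a faithful bit-for-bit copy is to hash both the write-blocked source and the image and compare the digests. The sketch below is illustrative only; the device and file paths are invented, and this is not a substitute for proper forensic tooling.

    import hashlib

    def sha256_of(path, chunk_size=1024 * 1024):
        # Hash the file in chunks so even very large disk images fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical paths: the write-blocked original and the acquired image.
    original_hash = sha256_of("/dev/sdb")
    image_hash = sha256_of("evidence/case042_disk.img")
    print("image verified" if original_hash == image_hash else "image does NOT match the original")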
Storage Area Network
A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system as locally attached devices. A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices.
Source of image: http://www.imexresearch.com/images/sasnassan-3.gif
Review of security controls
Management controls focus on the management of the IT security system and the management of risk for a system.
They are techniques and concerns that are normally addressed by management.
Routine evaluations of, and responses to, identified vulnerabilities are important elements of managing the risk of a system and are thus considered management controls.
SECURITY CONTROLS: The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
SECURITY CONTROL BASELINE: The set of minimum security controls defined for a low-impact, moderate-impact, or high-impact information system.