Anomaly Detection Systems and Generalization
Network servers are constantly exposed to attacks, so security measures that protect vulnerable software are an essential part of securing a system. Anomaly detection systems can improve this state of affairs: they learn a model of normal behavior from a set of empirical observations and then use that model to discover novel attacks.
In most cases, generalization is necessary for accurate anomaly detection; that is, the anomaly detection system must represent more instances than appear in its training data. This motivates creating a database of HTTP attacks against web servers and their applications. With such a database, the places where a system undergeneralizes can be identified automatically, which leads to more rapid identification of the heuristics needed for an anomaly detection system to attain the accuracy required for production use.
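As a rough illustration of this learn-then-detect workflow, the following Python sketch trains on a handful of normal HTTP request lines and flags requests whose features deviate from them. The features (request length and alphanumeric ratio) and the threshold are illustrative assumptions, not a specific published detector.

```python
# Minimal sketch of the learn-then-detect workflow (illustrative assumptions only).
from statistics import mean, stdev

def features(request: str) -> tuple:
    """Map an HTTP request line to a few simple numeric features."""
    return (
        len(request),
        sum(c.isalnum() for c in request) / max(len(request), 1),
    )

class SimpleAnomalyDetector:
    def fit(self, normal_requests):
        cols = list(zip(*(features(r) for r in normal_requests)))
        # Model "normal" as per-feature mean and standard deviation.
        self.stats = [(mean(c), stdev(c)) for c in cols]
        return self

    def is_anomalous(self, request, k=3.0):
        # Flag the request if any feature lies more than k standard
        # deviations from the mean observed during training.
        return any(
            abs(x - m) > k * (s or 1e-9)
            for x, (m, s) in zip(features(request), self.stats)
        )

normal_requests = [
    "GET /index.html HTTP/1.1",
    "GET /images/logo.png HTTP/1.1",
    "GET /about.html HTTP/1.1",
]
detector = SimpleAnomalyDetector().fit(normal_requests)
print(detector.is_anomalous("GET /help/index.html HTTP/1.1"))  # False: similar to training data
print(detector.is_anomalous("GET /search?q=" + "A" * 500))     # True: far from training data
```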
Security is an essential requirement: systems must be developed and deployed without major security vulnerabilities, and additional measures must be taken to secure them against the next attack.
One method to secure systems is to craft a specific defense for every observed problem, either in the form of an attack signature or a code patch. However, both strategies require a human to analyze each problem and develop a solution, which limits the feasible response time to a human timescale. Attacks by self-replicating programs can spread in a matter of seconds, so automated mechanisms that can identify and respond to threats in real time are needed. Anomaly detection systems can potentially detect novel attacks without human intervention.
An anomaly detection system builds a model of normal behavior from a training set. Normal behavior here means that attackers are not using the system to carry out tasks outside the set that administrators intend it to perform. Observations that differ from the model are labeled anomalies.
Using machine learning, an anomaly detection system builds a model of normal behavior. If the learning system is to do more than simply memorize the training data, it must generalize, that is, produce a model that represents more than the examples it has seen. When an anomaly detection system generalizes, it accepts input similar to, but not necessarily identical to, instances in the training data; the set of instances it considers normal is larger than the set of instances in the empirical data.
In most anomaly detection systems, the set of possible legal inputs is infinite, and the complete set of normal behaviors is unknown and may change over time. In such cases the anomaly detection system must work from a partial set of training data. Because the system must accept more instances than are given in the empirical data, generalization is a requirement.
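As a concrete illustration of generalization, the sketch below uses an assumed per-parameter character-set model (not a specific published technique): it accepts parameter values it has never seen, provided they are built only from characters observed during training, so the set of accepted values is larger than the training set.

```python
# Sketch of how a learned model generalizes beyond its training data
# (the character-set model is an illustrative assumption).

def learn_charset(training_values):
    """Learn the set of characters observed for one request parameter."""
    return set().union(*(set(v) for v in training_values))

def accepts(model, value):
    """Accept any value composed only of characters seen in training."""
    return set(value) <= model

training = ["alice", "bob", "carol"]     # observed parameter values
model = learn_charset(training)

print(accepts(model, "carl"))        # True: never seen, but similar -> generalization
print(accepts(model, "bob"))         # True: memorized training instance
print(accepts(model, "b' OR 1=1"))   # False: characters outside observed normal behavior
```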
The objective for an anomaly detection system is a model that accurately describes normal behavior.
- Overgeneralization:
If the algorithm generalizes excessively, the normal set becomes too large. In that case, attacks close enough to the empirical data are accepted as normal, producing false negatives and limiting the usefulness of the system.
- Undergeneralization:
A system that simply memorizes the empirical data would need enough storage for the complete normal set, which is impossible when the normal set is unknown or infinite. Such a system undergeneralizes and erroneously flags normal events as anomalous, producing false positives. An undergeneralizing system typically fails to accept normal instances that are slight variants of the empirical data. A sketch contrasting the two failure modes follows this list.
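The following sketch contrasts the two failure modes using an assumed length-interval model; the slack values are chosen only to make under- and overgeneralization visible, and do not represent any particular system.

```python
# Illustrative contrast between under- and overgeneralization.

def learn_length_interval(training_values, slack):
    """Model 'normal' as the observed length range widened by `slack`."""
    lengths = [len(v) for v in training_values]
    return (min(lengths) - slack, max(lengths) + slack)

def is_normal(model, value):
    lo, hi = model
    return lo <= len(value) <= hi

training = ["alice", "bob", "carol"]   # lengths 5, 3, 5

# Undergeneralization: no slack, so a legitimate 6-character name
# falls outside the model and is flagged (false positive).
tight = learn_length_interval(training, slack=0)
print(is_normal(tight, "robert"))                    # False -> false positive

# Overgeneralization: huge slack, so a long injection payload
# is accepted as normal (false negative).
loose = learn_length_interval(training, slack=1000)
print(is_normal(loose, "x" * 500 + "' OR 1=1 --"))   # True -> false negative
```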
To sum up, correct generalization is a prerequisite for accurate anomaly detection. An anomaly detection system accurate enough to be deployed must neither under- nor overgeneralize. When generalization is properly controlled, the anomaly detection is more accurate, and the system's model uses the data representation in a way that allows normal data instances to be discriminated from anomalous ones.