Why Web Application Security

Securing a company's web applications is today's most overlooked aspect of securing the enterprise. Hacking is on the rise, with as many as 75% of cyber attacks carried out through the web and via web applications.
Most corporations have secured their data at the network level, but have overlooked the crucial step of checking whether their web applications are vulnerable to attack.
Web applications raise particular security concerns:
1. To deliver their intended service to customers, web applications must be online and available 24x7x365.
2. This means that they are always publicly accessible and cannot discriminate between legitimate users and hackers.
3. To function properly, web applications must have direct access to backend databases that contain sensitive information.
4. Most web applications are custom-made and rarely pass through the rigorous quality assurance checks applied to off-the-shelf software.
5. Through a lack of awareness of the nature of hack attacks, organisations treat the web application layer as part of the network layer when it comes to security.
The Jeffrey Rubin Story

In a 2005 review published in Information Week, security expert Jeffrey Rubin narrates his experience of a successful hack attack. The following is a citation from his article (the full reference is given at the end of this article):
“We’re like most Web developers who use the Microsoft platform … Although we try to stay up to date with patches and service packs, we realize attackers often go after application, rather than network, vulnerabilities. A colleague suggested we install a hardware firewall to prevent future attacks. Not a bad suggestion, but hardly a cure-all given that we have Ports 21, 80 and 443 and our SQL server (on a nonstandard port) wide open for development purposes. After all, we’re in the business of developing dynamic Web pages, and our clients are all over the country”.
Jeff’s story is striking for two reasons: (a) developers, like everyone else, are prone to error despite all the precautions they take to sanitize the applications they build, and (b) even as an expert he was lulled into a false sense of security by applying the latest patches and service packs. Jeff’s story, sadly, is not unique; it arises from a misconception of what the security infrastructure of an organization is and of the solutions available to help people protect their data.
Since many organizations do not monitor online activity at the web application level, hackers have free rein: given even the tiniest loophole in a company’s web application code, an experienced hacker can break in using only a web browser and a dose of creativity and determination. Slack security also means that attempted attacks go unnoticed, as companies react only to successful hacks; they fix the situation AFTER the damage is done. Finally, most hack attacks are discovered months after the initial breach, simply because attackers do not want to, and will not, leave an audit trail.
Systems administrators, CTOs and business people alike picture cyber intrusion as they would physical intrusion: a thief in your house leaves markers, e.g., a broken window or a forced lock. In web application attacks this physical evidence is nonexistent.
The Security Infrastructure of an Organization

It is convenient to think of the infrastructure of an organization as consisting of various layers. In the same way you would protect against rust by applying a variety of paints, chemicals and anti-oxidants in layers, a systems administrator puts in place several specialized security solutions, each addressing a specific problem area.
These security layers represent a holistic outlook that looks at security as hardened measures taken to minimize intrusion risks and maximize the protection around the key asset of any organization, its data.
Standard security layers include:
- The User layer containing software including personal firewalls, anti-root kits, registry cleaners, backup, anti-virus, anti-phishing and anti-spy/adware
- The Transport layer including SSL encryption, HTTPS and similar protocols
- The Access layer with access control, authentication, cryptography, firewalls, VPNs and Web Application Firewalls
- The Network layer with firewalls, network scanners, VPNs, and intrusion detection.
The fifth layer is the Application layer, which must include website and web vulnerability scanning. Source code analysis also fits here.
Web Vulnerability Scanners are not Network Scanners

Web vulnerability scanners (e.g., Acunetix WVS, Spi Dynamics WebInspect) are not network scanners (e.g., Qualys, Nessus).
Whereas network security scanners analyze the security of assets on the network for possible vulnerabilities, Web Vulnerability Scanners (WVS) scan and analyze web applications (e.g., shopping carts, forms, login pages, dynamic content) for any gaps resulting from improper coding that may be manipulated by hackers.
For example, it may be possible to trick a login form into believing that you have administrator rights by injecting specially crafted SQL (the language understood by databases) commands. This is only possible if the inputs (i.e., the username and/or password fields) are not properly sanitized (i.e., validated and escaped) and are concatenated directly into the SQL query sent to the database. This is SQL injection!
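The login-form attack described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration using Python's built-in sqlite3 module: the table, credentials and function names are invented for the example, not taken from any real application.

```python
import sqlite3

# Hypothetical in-memory users table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(username, password):
    # UNSAFE: user input is concatenated directly into the SQL string.
    query = ("SELECT * FROM users WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchone() is not None

def login_safe(username, password):
    # SAFE: parameterized query; the driver treats inputs as data, not SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

# Classic injection payload: it closes the quoted string and makes the
# WHERE clause always true, bypassing the password check entirely.
payload = "' OR '1'='1"
print(login_vulnerable("admin", payload))  # True: attacker is "logged in"
print(login_safe("admin", payload))        # False: payload is just a string
```

The difference is entirely in how the input reaches the database: the vulnerable version lets the attacker rewrite the query itself, while the parameterized version can only ever compare the payload as an ordinary string.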
Network security defenses provide no protection against such web application attacks, since these are launched on port 80 (the default for websites), which has to remain open to allow regular operation of the business.
What is needed is a web application scanner / web vulnerability scanner or a black-box testing tool.
Black Box Testing

Black box testing is simply a test design methodology. In web application black box testing, the web application is treated as a whole, without analyzing its internal logic and structure. Typically, a web application scanner checks whether the web application as a whole can be manipulated to gain access to the database. Modern technology allows for a great degree of automation, in effect reducing the manual input required in testing web applications.
It is important to say reducing, not eliminating or doing away with: as any security consultant will tell you, automation will never replace the intelligence and creativity of human intervention.
In general, automated scanners first crawl an entire website, analyzing each file they find in depth and mapping the entire website structure. After this discovery stage, the scanner performs an automatic audit for vulnerabilities by launching a series of hacking attacks, in effect emulating a hacker. The scanner analyzes each page for places where data can be input and subsequently attempts all the different input combinations. It checks for vulnerabilities on web servers (on open ports), in all web applications and in the website content itself. The more robust products launch such attacks intelligently, using varying degrees of heuristics.
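The two stages above, crawl then audit, can be sketched as a toy scanner. Everything here is hypothetical: the "web application" is simulated by a plain Python function, the page names are invented, and the audit injects only a single quote, whereas a real scanner would try many payloads across many attack classes.

```python
import re

# A toy "web application": maps (path, params) to an HTML response.
# /search.php is deliberately vulnerable: a single quote in `q` leaks
# a database error message, as a real unsanitized query might.
def toy_app(path, params=None):
    params = params or {}
    if path == "/":
        return '<a href="/about.html">about</a> <a href="/search.php?q=test">search</a>'
    if path == "/about.html":
        return "<p>About us</p>"
    if path == "/search.php":
        q = params.get("q", "")
        if "'" in q:
            return "You have an error in your SQL syntax near '%s'" % q
        return "<p>No results for %s</p>" % q
    return "404"

SQL_ERROR_SIGNS = ["error in your SQL syntax", "ODBC", "unclosed quotation"]

def crawl(app, start="/"):
    """Discovery stage: follow links breadth-first, collecting every
    page and the query parameters it accepts."""
    seen, queue, pages = set(), [start], {}
    while queue:
        url = queue.pop(0)
        path, _, qs = url.partition("?")
        params = [p.split("=")[0] for p in qs.split("&") if p]
        if path in seen:
            pages[path].update(params)
            continue
        seen.add(path)
        pages[path] = set(params)
        for link in re.findall(r'href="([^"]+)"', app(path)):
            queue.append(link)
    return pages

def audit(app, pages):
    """Audit stage: inject an attack payload into every discovered
    parameter and look for tell-tale error signatures in the response."""
    findings = []
    for path, params in pages.items():
        for param in params:
            body = app(path, {param: "'"})
            if any(sign in body for sign in SQL_ERROR_SIGNS):
                findings.append((path, param, "possible SQL injection"))
    return findings

site_map = crawl(toy_app)
print(audit(toy_app, site_map))  # flags parameter q of /search.php
```

The structure mirrors the text: crawling establishes the site map and its input points; auditing then attacks each input and interprets the responses. The heuristics discussed below live in how payloads are chosen and how responses are judged.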
Heuristic Web Scanning

It is important to understand that web vulnerability scanning should not be limited to checking known applications (e.g. off-the-shelf shopping carts) and/or known module vulnerabilities (e.g. SQL injection in the phpBB login form) against a pre-determined library of known issues. If it were, custom applications would remain untested for vulnerabilities. This is the main weakness of products based on matching vulnerability signatures.
Consider anti-virus software as an example. Standard antivirus products scan for thousands of known viruses, including old ones (even viruses written for Windows 95 systems you would rarely encounter today), because in the minds of consumers what matters most is “how many viruses does this software detect?”. In reality, a signature-only product protects you against everything except the new viruses circulating in the wild, and it is precisely these that cause the greatest damage: software that can only match “known” viruses will not detect a new one. Good antivirus technology therefore adds heuristic file checking, intelligent ways of identifying patterns of application behavior that may indicate a virus.
Web vulnerability scanning works in a very similar way. It would be of little use to detect only the known vulnerabilities of known applications. A significant degree of heuristics is involved in detecting vulnerabilities, since hackers are extremely creative and launch their attacks against bespoke web applications to create maximum impact.
Of course, such an approach does produce false positives, but even here there is misconception and confusion. False positives arise because an automated scan will flag issues that merely appear to be vulnerabilities. Automation is an invaluable aid, and the accuracy of a scan depends on (a) how well your site is crawled to establish its structure, components and links, and (b) the scanner’s ability to intelligently leverage the various hacking methods and techniques against web applications.
No level of technological complexity leads to zero false positives; an automated scan will always generate some, whichever product you use.
We always recommend that automated scans be supplemented with manual testing; this is probably one point that all security experts emphasize. Sadly, companies do not recognize the importance of manual input. If you want your web applications to be secure, you must spend a considerable amount of time verifying the automated results. This is not to say that automation is inaccurate; on the contrary, it is very accurate and has cut out much of the work. The automated scan will flag the possible problems, including the false positives, and prompt further manual investigation.
In web application security, it is better to have false positives than nothing at all.
Source Code Analysis

Another set of products related to web vulnerability scanners are source code analyzers, though these work differently from web vulnerability scanners. Source code analyzers are white-box testing tools that assist developers by automatically analyzing the internal structure and logic of source code directly for errors and security loopholes. The complexity of such products reflects the complexity of application logic and the variety of coding languages; as a result, few stable products exist on the market, even though the technology is moving fast.
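To make the white-box idea concrete, here is a minimal sketch of one kind of check such a tool might perform, using Python's built-in ast module. The sample source, the rule (flag execute() calls whose SQL is assembled with % or +) and the class name are all invented for illustration; real analyzers apply far richer rules, data-flow tracking and many languages.

```python
import ast

# Hypothetical source under analysis: one unsafe and one safe query.
SOURCE = '''
def find_user(conn, name):
    conn.execute("SELECT * FROM users WHERE name = '%s'" % name)   # unsafe
    conn.execute("SELECT * FROM users WHERE name = ?", (name,))    # safe
'''

class SqlStringCheck(ast.NodeVisitor):
    """Flag execute() calls whose first argument is built with % or +,
    i.e. SQL assembled from strings rather than parameterized."""
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        is_execute = (isinstance(node.func, ast.Attribute)
                      and node.func.attr == "execute")
        if is_execute and node.args:
            first = node.args[0]
            if isinstance(first, ast.BinOp) and isinstance(first.op, (ast.Mod, ast.Add)):
                self.findings.append(node.lineno)
        self.generic_visit(node)

checker = SqlStringCheck()
checker.visit(ast.parse(SOURCE))
print(checker.findings)  # line numbers of string-built SQL calls
```

Unlike the black-box scanner, this inspects the code's structure directly, so it can flag the unsafe query without ever running the application; the trade-off is that it must understand each language and coding pattern it analyzes.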