This testbed provides a set of subject web applications that are vulnerable to SQL Injection Attacks (SQLIAs), along with test inputs that represent malicious and legitimate accesses to the applications. The purpose of the testbed is to facilitate the evaluation of SQL injection detection and prevention techniques. We originally developed the testbed to evaluate our AMNESIA approach and later expanded it to evaluate our WASP approach. Please cite the ASE 2005 and FSE 2006 papers as the source for the testbed.

Source of Subject Applications

Table 1: Information about subject applications.

Subject            | LOC   | DBIs | Servlets
Office Talk        | 4,543 | 40   | 64
Employee Directory | 5,658 | 23   | 10

Our set of subjects consists of seven Web applications that accept user input via Web forms and use the input to build queries to an underlying database. Five of the seven applications are commercial applications that we obtained from GotoCode: Employee Directory, Bookstore, Events, Classifieds, and Portal. The other two, Checkers and OfficeTalk, are applications developed by students and have been used in previous related studies (Gould, Su, Devanbu; ICSE 2004).

For each subject, Table 1 provides the size in terms of lines of code (LOC), the number of database interaction points (DBIs), and the total number of servlets (Servlets).
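A database interaction point is any statement that sends a query string to the underlying database. The subject applications are injectable because they build those strings by concatenating user input into the SQL text. The following sketch (in Python for brevity; it is not code from the testbed, whose subjects are Java servlets) illustrates the vulnerable pattern at a typical DBI:

```python
# Hypothetical illustration of a database interaction point (DBI).
# The function names and schema are made up for this example.

def build_login_query(username, password):
    # Vulnerable pattern: user input is spliced directly into the SQL string,
    # so input containing quotes or comment operators changes the query.
    return ("SELECT id FROM users WHERE login = '" + username +
            "' AND pass = '" + password + "'")

# A classic tautology-style attack comments out the password check:
attack = build_login_query("admin' --", "irrelevant")
# attack is now:
# SELECT id FROM users WHERE login = 'admin' --' AND pass = 'irrelevant'
```

Counting such concatenation points per application yields the DBIs column of Table 1.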

Generation of Test Inputs

For each application in the testbed, there are two sets of inputs: LEGIT, which consists of legitimate inputs for the application, and ATTACK, which consists of attempted SQLIAs. The inputs were generated independently by a Master's level student with experience in developing commercial penetration testing tools for Web applications.

Table 2: Test input generation.

Office Talk        | 5,888 | 499   | 424
Employee Directory | 6,398 | 2,066 | 658

To create the ATTACK set, the student first built a set of potential attack strings by surveying different sources: exploits developed by professional penetration-testing teams to take advantage of SQL-injection vulnerabilities; online vulnerability reports, such as US-CERT and CERT/CC Advisories; and information extracted from several security-related mailing lists. The resulting set of attack strings contained 30 unique attacks that had been used against applications similar to the ones in the testbed. All types of attacks reported in the literature were represented in this set except for multi-phase attacks, such as those that exploit overly descriptive error messages and second-order injections. Since multi-phase attacks require human intervention and interpretation, we omitted them to keep the testbed fully automated. The student then generated a complete set of inputs for each servlet's injectable parameters using values from the set of initial attack strings and legitimate values. The resulting ATTACK set contained a broad range of potential SQLIAs.
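The generation step described above can be sketched as a cross product: each injectable parameter receives, in turn, each attack string, while the remaining parameters keep legitimate values. The parameter names, attack strings, and values below are illustrative, not the student's actual data:

```python
from itertools import product

# Illustrative sample of attack strings (the real set contained 30).
ATTACK_STRINGS = ["' OR '1'='1", "'; DROP TABLE users --"]

# Hypothetical legitimate defaults for a servlet's parameters.
LEGIT_VALUES = {"login": "jdoe", "dept": "sales"}

def generate_attack_inputs(injectable_params):
    """Yield one parameter assignment per (parameter, attack string) pair:
    the targeted parameter carries the attack, the rest stay legitimate."""
    for target, attack in product(injectable_params, ATTACK_STRINGS):
        values = dict(LEGIT_VALUES)
        values[target] = attack
        yield values

inputs = list(generate_attack_inputs(["login", "dept"]))
# 2 injectable parameters x 2 attack strings -> 4 generated inputs
```

This is why the ATTACK sets in Table 2 are much larger than the number of attack strings: the counts grow with the number of injectable parameters per servlet.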

The LEGIT set was created in a similar fashion. However, instead of using attack strings to generate sets of parameters, the student used legitimate values. To create "interesting" legitimate values, we asked the student to create inputs that would stress and possibly break naive SQLIA detection techniques (e.g., techniques based on simple identification of keywords or special characters in the input). The result was a set of legitimate inputs that contained SQL keywords, operators, and troublesome characters, such as single quotes and comment operators, but in a way that should not cause an attack.
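To make concrete what such a naive detector looks like, and why the LEGIT set stresses it, here is a minimal sketch (our own illustration, not a technique from the papers) that flags any input containing SQL keywords or special characters:

```python
import re

def naive_detector(value):
    """Naive SQLIA check: flag SQL keywords or suspicious characters.
    This is exactly the kind of technique the LEGIT set is built to break."""
    return bool(re.search(r"(?i)\b(select|drop|or)\b|['#-]{1,2}", value))

# Benign inputs in the style described above trip the detector:
naive_detector("O'Brien")       # a name containing a single quote
naive_detector("to be or not")  # contains the SQL keyword OR
```

Both calls return a match even though neither input can cause an attack, so a technique of this kind would report false positives on the LEGIT set.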

Useful Setup Information

  1. Required Software: Tomcat 5+, Java 1.4+, MySQL 5+
  2. Each application is distributed as a WAR file and can be deployed as is if the database is set up correctly.
  3. To correctly set up an application's database, use the DB initialization script included in each application's WAR file.
  4. Four files contain the test inputs: the one with the suffix "Legit_URLS" contains the LEGIT set, and the others comprise the ATTACK set.
  5. A sample script "TestAmnesia" is included to show how to run the test inputs and process the results.
  6. The "~~~" in the test inputs is a marker we used to denote intended attacks.
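When processing results, the "~~~" marker can be used to separate intended attacks from legitimate inputs. The sketch below assumes one test URL per line, with "~~~" appearing somewhere on lines that are intended attacks; the URLs themselves are made-up examples, not inputs from the testbed:

```python
def split_inputs(lines):
    """Partition test-input lines into intended attacks and legitimate
    inputs, using the "~~~" marker described in the setup notes."""
    attacks, legit = [], []
    for line in lines:
        (attacks if "~~~" in line else legit).append(line.strip())
    return attacks, legit

# Hypothetical sample lines in the assumed one-URL-per-line format:
sample = [
    "http://host/bookstore/Login?user=jdoe&pass=secret",
    "http://host/bookstore/Login?user=admin'~~~--&pass=x",
]
attacks, legit = split_inputs(sample)
```

A result-processing script can then compare a detection technique's reports against the `attacks` list to count false negatives and against `legit` to count false positives.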