OWASP ZAP API Scanning with Authentication: From Desktop to Docker (Part 3)

Now that everything works in desktop Zap, we can move things over to dockerized Zap. Let's go back to the api_scanning_with_auth_from_desktop_to_docker repository and walk through some of the files to explain what they do.

zap-script.sh, shown below, is the main script that initializes Zap, runs zap-api-scan.py and terminates the container. Please take a look at the contents of the file; the comments in it explain more about what the script is doing.

Now let's take a look at the files inside /mounted_dir.


Now, this config.xml file was taken from the owasp/zap2docker-stable image and modified by adding the following snippet just below the tag. This XML snippet tells Zap to load the two scripts we have seen before. The /mounted_dir directory will be mounted to /zap/wrk, which is why the script locations reference /zap/wrk from inside the Zap container.

In zap-script.sh, we copy this modified config.xml into the Docker container after it starts up but before we run Zap via zap-api-scan.py.


We have changed the logging levels to debug (red arrows) for troubleshooting purposes; you can also leave them at info. We have also significantly increased the size zap.log can reach before it rotates (green arrow). This is because we want all our logging in a single file (instead of zap.log.1, zap.log.2, etc.). That single file is then copied out of the Docker container into the mounted_dir directory (done in zap-script.sh) after zap-api-scan.py finishes its run but before the container gets shut down and removed, so it is available for further investigation if needed.


This file was also taken from the owasp/zap2docker-stable image, modified, and is later copied into the Docker container before Zap starts up, in zap-script.sh. The changes are required because we need to tell Zap which context to use: specifying the context file to zap-api-scan.py with the -n flag does not instruct Zap to use that context; it only initializes Zap with it. The code changes below had to be added to zap-api-scan.py to instruct Zap to use the context we just uploaded (red arrow), by first getting the context id.
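The idea can be sketched roughly as follows, assuming `zap` is a connected `zapv2.ZAPv2` client. The function name, parameters, and context file path here are illustrative, not the exact code in zap-api-scan.py:

```python
def get_context_id(zap, context_file, context_name):
    """Import a context file into Zap and return the id of the
    resulting context, so later calls can target it explicitly.

    `zap` is assumed to be a connected zapv2.ZAPv2 client.
    """
    # Loading a context on startup alone does not make Zap use it;
    # importing it and looking it up gives us the numeric id that
    # scan_as_user() will need later.
    zap.context.import_context(context_file)
    # context() returns a dict of context details, including 'id'
    info = zap.context.context(context_name)
    return info['id']
```

The returned id is what gets threaded through to the active scan further down the file.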

Also, in the green box, we had to add these two lines to get Zap to load the token script and enable it. Specifying it in config.xml does not do that.
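In terms of the ZAP API, those two lines amount to a load call followed by an enable call. A minimal sketch, again assuming `zap` is a connected `zapv2.ZAPv2` client; the script type and engine shown match an HTTP Sender JavaScript script, so adjust them to whatever your token script actually uses:

```python
def load_and_enable_script(zap, script_name, script_path):
    """Load a script into Zap via the API and then enable it.

    `zap` is assumed to be a connected zapv2.ZAPv2 client; the
    script type/engine values are assumptions for a JavaScript
    HTTP Sender script.
    """
    zap.script.load(scriptname=script_name,
                    scripttype='httpsender',
                    scriptengine='Oracle Nashorn',
                    filename=script_path)
    # Loading alone is not enough -- the script must also be enabled
    zap.script.enable(script_name)
```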

As we go further down the file, we encounter the zap_active_scan() function, where we pass the context id we obtained earlier as an input parameter. The zap_active_scan() function in zap_common.py also needed to be modified to accept this additional parameter, as we will see later.


If you scroll down to the zap_active_scan() function, it was modified to accept an additional parameter called contextId (red arrow). It is then passed as an input parameter (green arrow) to the zap.ascan.scan_as_user() function. Note that the first contextid is all lowercase in contextid=contextId. The scan_as_user() function (purple arrow) is an existing function of the ascan class; we call it instead of the original scan() function because it lets us specify the user id (yellow arrow), which in this case is user 68. You can find this number in Authorization_Code_Flow.context, as mentioned in Part 1.
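The modified zap_active_scan() can be sketched along these lines, assuming `zap` is a connected `zapv2.ZAPv2` client. This is an illustrative reconstruction, not the verbatim zap_common.py code; the user id 68 comes from Authorization_Code_Flow.context as described above:

```python
import time

def zap_active_scan(zap, target, scan_policy_name, contextId):
    """Run Zap's active scan as a specific user within a context.

    `zap` is assumed to be a connected zapv2.ZAPv2 client; user id
    '68' is the user defined in Authorization_Code_Flow.context.
    """
    # scan_as_user() replaces the original zap.ascan.scan() call so
    # that the scan runs inside our context, authenticated as user 68.
    scan_id = zap.ascan.scan_as_user(target,
                                     contextid=contextId,
                                     userid='68',
                                     recurse=True,
                                     scanpolicyname=scan_policy_name)
    # Poll until the active scan reports 100% complete
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)
    return scan_id
```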

The Setup

Now that we have our Linux environment with all the prerequisites set up, we can clone both repositories into the locations shown below.

Go into ~/api_scanning_with_auth_from_desktop_to_docker and execute chmod 777 zap-script.sh to make the file executable, because this is the primary script that holds everything together.

Open up two more terminal windows, so we have three terminal windows altogether. Terminal 1 is for executing zap-script.sh; terminals 2 and 3 are for running the resource and authorization servers respectively.

Going into terminal 2, we execute cd oauth2_auth_resource_servers/resource_server to change into that directory and start up the resource server by executing mvn spring-boot:run. The first time you run this command it will take a while, as all the dependencies needed to run the server are downloaded. When the server is running, you should see the output below. Note that the resource server runs on port 8084.

After the resource server is up and running, we do the same in terminal 3: execute cd oauth2_auth_resource_servers/auth_server to change into that directory and run mvn spring-boot:run to start the authorization server. Note that the authorization server runs on port 8081.

When both servers are running, we go back to terminal 1, in the api_scanning_with_auth_from_desktop_to_docker directory, and execute ./zap-script.sh. You should see the output of zap-api-scan.py, and once the script finishes after some time, you will find the HTML report and zap.log in the mounted_dir directory.
