Troubleshooting jobs
Recorded login sequence
Not all the values were captured during recording. Errors are sometimes made during the recording process and values are not captured properly. Delete the URL for that value, or delete the entire sequence and record again.
Why are duplicate URLs appearing in the list? The recorded login sequence records everything, so if you have recorded a login sequence a few times, duplicates can appear. Go through the list and delete the entries you do not want.
My sequence has errors and does not work. What do I do? View the HTTP requests and check that the correct session IDs, cookies, and other parameters are being sent during login.
How do I check whether the correct parameters are being sent? Select the URL on the Login Management page and click View HTTP Requests.
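If you want to confirm outside the product which cookies and parameters the server actually issues at login, a minimal sketch such as the following can help. The login URL, form field names, and credentials below are placeholders, not values from your recorded sequence; substitute the details shown in View HTTP Requests.

```python
# Replays a login outside the product to see which session cookies and
# parameters the server actually sends. All values below are placeholders.
import requests

LOGIN_URL = "https://example.com/app/login"                   # hypothetical login URL
CREDENTIALS = {"username": "testuser", "password": "secret"}  # placeholder form fields

with requests.Session() as session:
    response = session.post(LOGIN_URL, data=CREDENTIALS, timeout=30)
    response.raise_for_status()

    # Cookies the server set during login; compare these names and values
    # with the ones shown in the recorded HTTP requests.
    for name, value in session.cookies.items():
        print(f"cookie: {name}={value}")

    # The redirect chain can also reveal session IDs passed as URL parameters.
    for hop in response.history:
        print(f"redirected via: {hop.url}")
```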
In-session detection failed. What happened?
Any number of events could have occurred. See CRWAE1501W.
Not all URLs were found on the site
Use the Manual Explore feature to browse your site yourself and record URLs along the way.
Scan progresses very slowly
If your scan is progressing slowly, and the job's entities tested and pages scanned statistics are increasing just as slowly, check the following possibilities:
- Is your IP address being blocked by an IDS on the server being scanned? If you cannot open a browser on the Agent Server and browse the site (see the connectivity sketch after this list), the security scan might have triggered a security alert on the server being scanned. If this is the case, contact the site owner and ask for approval to scan the site.
- Ensure you do not have a firewall running on the Agent Server computer.
- Check the scan log while the job is running to troubleshoot what is happening. You can export the scan log as a text file by clicking Export on the job's statistics page. If you export the log while the scan is running, only the messages up to the point of export will be included in the file.
Note: Scan log messages might not appear in the exact order that the scan events occur. Specifically, many messages are triggered when links are found, rather than when they are actually visited.
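A quick connectivity probe, run on the Agent Server itself, can help separate IDS or firewall blocking from other causes of a slow scan. This is a rough sketch; the target URL is a placeholder for the site being scanned, and the default port assumes HTTPS.

```python
# Run this on the Agent Server. If it fails while the same URL opens from
# another machine, suspect an IDS block or a local firewall.
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

TARGET = "https://example.com/"            # replace with the site being scanned

host = urlparse(TARGET).hostname
port = urlparse(TARGET).port or 443        # assumes HTTPS if no port is given

# Step 1: can the Agent Server open a TCP connection to the web server?
with socket.create_connection((host, port), timeout=10):
    print(f"TCP connection to {host}:{port} succeeded")

# Step 2: does an HTTP request come back with a sensible status code?
with urlopen(TARGET, timeout=10) as resp:
    print(f"HTTP status: {resp.status}")
```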
Scan does not stop when scanning a WebSphere® Portal
When the scan sends the encoded WebSphere® Portal URL to the decoding web service, it expects the returned navigational state to contain static ResourceIDs. If this is not the case for your WebSphere® Portal, contact your Product Administrator to change the configuration.
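If you want a rough way to check whether the decoded navigational state contains static ResourceIDs, the sketch below decodes the same encoded URL twice and compares the IDs it finds. The decoding service endpoint, request format, and the ResourceID pattern are assumptions about a typical setup, not the actual service interface; adapt them to your WebSphere® Portal environment.

```python
# Decodes the same encoded Portal URL twice and compares the ResourceIDs.
# All URLs and the ResourceID pattern below are illustrative assumptions.
import re
import requests

DECODER_URL = "https://portal.example.com/decode"                    # hypothetical decoding service
ENCODED_URL = "https://portal.example.com/wps/portal/!ut/p/z1/example"  # sample encoded Portal URL

def resource_ids(encoded_url):
    """Decode the URL and pull ResourceID-like tokens from the navigational state."""
    response = requests.post(DECODER_URL, data={"url": encoded_url}, timeout=30)
    response.raise_for_status()
    return re.findall(r'ResourceID="([^"]+)"', response.text)

# If the IDs differ between two decodes of the same URL, they are not static,
# and the scan treats each response as a new page instead of stopping.
first, second = resource_ids(ENCODED_URL), resource_ids(ENCODED_URL)
print("static" if first == second else "not static", first)
```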
Job cannot run
Check to see if the job has a Job Owner. A Job Owner is required to perform scans or tests. If the job has no owner, such as when a user is deleted, then the job cannot be run.
Scan does not see all or part of my application
The scan might not be logging into the site. Try recording a login sequence.
How can I get the scan to recognize certain areas of my application?
Use Manual Explore to capture URLs that are not recognized automatically, such as links behind nonstandard JavaScript™ postbacks, links embedded in JavaScript™ or Flash, and orphan pages. Because the browser records everything it encounters, keep only what is necessary and delete the rest, such as gif, css, js, and jpeg files; otherwise, they are included in the scan job. Use the Show URLs button to hide or show these files.
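The kind of pruning described above can also be done outside the product on an exported URL list. This is a minimal sketch; the recorded URLs and the extension list are illustrative.

```python
# Drops recorded URLs for static assets (gif, css, js, jpeg) so that only
# application pages remain. The URL list below is illustrative.
from urllib.parse import urlparse

STATIC_EXTENSIONS = {".gif", ".css", ".js", ".jpeg", ".jpg"}

recorded_urls = [
    "https://example.com/account/summary",
    "https://example.com/styles/site.css",
    "https://example.com/scripts/menu.js",
    "https://example.com/images/logo.gif",
    "https://example.com/orders/history",
]

def is_static_asset(url):
    path = urlparse(url).path.lower()
    return any(path.endswith(ext) for ext in STATIC_EXTENSIONS)

keep = [url for url in recorded_urls if not is_static_asset(url)]
print(keep)   # only the application pages remain
```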
Content scan stops after scanning a few pages
To scan further into the site:
- Clear the In starting domains, only scan links in and below the directory of each starting URL option on the What to Scan page. When this option is selected, the scan cannot progress further after it encounters a URL that is outside the starting directory (the sketch after this list illustrates the rule).
- Add the URL to your list of Starting URLs on the What to Scan page.
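The following sketch shows one way to think about the "in and below the starting directory" rule and why a link can halt further exploration. The URLs and the comparison logic are illustrative, not the product's actual implementation.

```python
# Decides whether a candidate URL is in or below the directory of a
# starting URL. URLs are illustrative.
import posixpath
from urllib.parse import urlparse

def in_or_below(starting_url, candidate_url):
    start, cand = urlparse(starting_url), urlparse(candidate_url)
    if start.netloc != cand.netloc:
        return False
    start_dir = posixpath.dirname(start.path) or "/"
    return cand.path == start.path or cand.path.startswith(start_dir.rstrip("/") + "/")

STARTING_URL = "https://example.com/shop/index.html"

print(in_or_below(STARTING_URL, "https://example.com/shop/cart/view"))  # True: below /shop
print(in_or_below(STARTING_URL, "https://example.com/admin/login"))     # False: outside /shop
```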
Why did my content scan stop or get suspended?
- When the page limit parameter for your installation is reached, the scan stops and an entry is made in the job's log.
- Network problems caused the connections between the Enterprise Console, agent, and database to be lost.
- The database server ran out of memory.
Error message when attempting to run a scan job via the browser
Attempts to access the Enterprise Console generate the following error message: Description: The remote server machine does not exist or is unavailable; Number: 800a01ce. To resolve the error:
- Open the IIS Internet Services Manager.
- Expand the computer icon, and then expand Default website.
- Right-click on the virtual directory, and then select Properties.
- On the Virtual Directory tab, click Unload.
- Click OK, and then close the IIS Internet Services Manager.
- Close the browsers, and then reopen the Enterprise Console.
What happened to my Session IDs?
Session IDs are removed from URLs during normalization. To achieve consistent results, normalization is applied to all parameters and cookies in a particular domain, so Session IDs are entered as parameter exclusions when you edit the properties of a domain.
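A minimal sketch of what parameter-exclusion normalization does: the excluded session parameter is stripped from the URL so that two visits to the same page compare as equal. The parameter names below are examples of typical session IDs, not a list from the product.

```python
# Strips excluded session parameters from a URL during normalization.
# The excluded parameter names are illustrative examples.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

EXCLUDED_PARAMETERS = {"jsessionid", "PHPSESSID"}   # example session ID names

def normalize(url):
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in EXCLUDED_PARAMETERS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(normalize("https://example.com/cart?item=42&PHPSESSID=a1b2c3"))
# -> https://example.com/cart?item=42
```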
What is an invalid starting URL?
An invalid starting URL is one that the scan cannot validate on your network. Always test a starting URL in a browser first, but remember that it is the Enterprise Console's server (the computer where the product is installed), acting as the service account, that actually submits the starting URL. Because of these two factors (a different computer and a different account), you might be able to open the URL while the scan cannot. You can add an invalid starting URL to a scan if you foresee it becoming valid before the scan starts. There are other reasons why you might want to add an invalid starting URL; see Reasons for adding non-valid Starting URLs.
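A hedged sketch for pre-checking a starting URL from the Enterprise Console server itself (run it on that machine, ideally under the service account), since that is the context in which the URL must resolve. The starting URL below is a placeholder.

```python
# Run on the Enterprise Console server to check whether a planned starting
# URL resolves from there. The URL is a placeholder.
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

STARTING_URL = "https://example.com/app/"   # the URL you plan to add

try:
    with urlopen(STARTING_URL, timeout=15) as response:
        print(f"reachable, HTTP status {response.status}")
except HTTPError as exc:
    print(f"server responded, but with an error status: {exc.code}")
except URLError as exc:
    print(f"not reachable from this server: {exc.reason}")
```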
Why did my scan suspend?
The first thing to do when a scan suspends is to turn off the JavaScript™ execute option and rerun the job. The option is found on the content scan job's Explore Options page.
Servers and domains
Why is domain A being reported as internal, when I consider it to be external? The domain was added to the global list of domains.
How do I remove external domains from my reports? Add the domains as page filters to the report pack. Although you can also exclude a domain from a job, excluding domains through a report pack is the better method: even if you exclude a domain from one job, the report pack could include another job that scans the same domain.
XRules
Why is my job running slower? It might be because you added an XRule to it. To disable the XRule indefinitely, open the XRule and click Always disable XRule (if job experiences problems). If the XRule is currently used by other items, it will not run while it is disabled. You can always turn the XRule on again after you determine what is wrong with it.
Connections
The FTP checking on my site is not working. Try selecting Use passive mode (PASV) FTP transfers in the FTP Settings section of the Connections page. Some FTP proxy servers require that the client use Passive mode while accessing the proxy.
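To see the same passive-mode behavior outside the product, a short sketch with Python's ftplib follows. The host and credentials are placeholders; some FTP proxy servers only accept data connections when the client is in passive (PASV) mode.

```python
# Connects to an FTP server in passive mode and lists the directory.
# Host and credentials are placeholders.
from ftplib import FTP

with FTP("ftp.example.com", timeout=30) as ftp:          # hypothetical host
    ftp.login("anonymous", "scanner@example.com")
    ftp.set_pasv(True)   # passive mode (ftplib's default), the equivalent of
                         # "Use passive mode (PASV) FTP transfers"
    print(ftp.nlst())    # listing fails behind some proxies unless PASV is on
```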
Blackout periods
Will the blackout period apply to external domains? No, blackout periods apply only to internal domains.
How do I override an auto-suspended job? Find the job in the Folder Content Summary, select it, and click the Run icon. A warning message appears to confirm that you want to override the blackout period.
How do I find out how long a job took to run when it has a blackout period? Check the job statistics page.