I'm having difficulty figuring out how to fix this.
When I run my tests from the CLI, one of my test cases fails 2 out of every 3 runs, but when run from the GUI it passes 100% of the time. Looking at the test reports in iTest for the failing step, the issue is that the response from the current step is not displayed until the following step.
For example:
> command w/analysis - Analysis fails here when run from the CLI because the response is empty
> command - Response of the previous command, along with the current command's response, shows up in this command's response
Why is the response delayed when run from the CLI, but not from the GUI?
Answer by PaulD · Apr 30, 2009 at 11:05 PM
I suspect that you have some timing sensitivity, and that things are running at a slightly different speed via the command line.
I expect that this is happening because the prompts and completion properties on your steps are not set up correctly. Is it possible that the step has completion set to AUTO OR IDLE rather than AUTO AND IDLE, or perhaps TIMED? Unless you are using AUTO AND IDLE, you are probably going to have timing sensitivities in your test case, and that will result in behavior like you are seeing here.
Perhaps you could post the test case and associated session profile and/or testbed files?
PaulD wrote:
Is it possible that the step has completion set to AUTO OR IDLE rather than AUTO AND IDLE, or perhaps TIMED?
I checked the completion setting for my testbed devices, and they are both set to AUTO AND IDLE, so that can't be the issue for this particular situation, unless you can set this in multiple locations. I do agree with your assessment that it has to be a timing issue, but it seems weird that I'm only having a timing issue on this one command out of all my test cases.
So I need to check my prompts? I'm not sure why a prompt would cause a step's response to show up in the next step's response, though, unless I somehow have a prompt that matches a blank line. The only prompt I can see that pertains to this particular step is the config prompt of the switch I'm connecting to.
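(For reference, that prompt is just the switch hostname plus the config-mode suffix, e.g. Switch(config)#, with my actual hostname in place of Switch.)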
The testcase itself is basically this:
open device
enable mode
configure mode
configure something
do show run | include ^version <- This is basically the command in question. I'm filtering the response to show only lines that start with version, then checking that version X.X is in the response and that it's the right version (see the sample output after this list). It's not literally what I'm doing, but it's the exact same idea.
exit configure mode
close switch
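For reference, on a run that passes, the filtered output looks something like this (the version number here is just an illustration):

switch(config)# do show run | include ^version
version 12.2

The analysis step simply checks that a version line like that is present and shows the expected version.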
I attached a testbed file for the device it's happening on. It should have all the same settings and prompts, minus the IP and port.
I think the problem is apparent by looking at the prompts configured. These include several prompts that look like console log messages, such as:
<item name="prompt6" MatchMethod="WILDCARD">
<Content>*%SYS-5-CONFIG_I: Configured from console by console</Content>
</item>
<item name="prompt10" MatchMethod="WILDCARD">
<Content>*: %SYS-5-CONFIG_I: Configured from console by console</Content>
</item>
(and others)
This suggests to me that perhaps you haven't turned off console logging on the DUT. Now imagine that, while executing, a console log message comes out asynchronously right after the previous command. iTest will then consider that console log message a prompt and will move on.
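To illustrate, here is a sketch of what the raw session might look like on a failing run (a hypothetical transcript, not taken from your actual report):

switch(config)# do show run | include ^version
%SYS-5-CONFIG_I: Configured from console by console
version 12.2
switch(config)#

The log message on the second line matches one of your wildcard prompts, so the step ends there with an empty response, and the version line plus the real prompt land in the next step's response.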
You need to turn off console logging on the device. If you need to use the log messages in your test case, I believe there is a command to explicitly retrieve them on demand.
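On a Cisco IOS device, for example, disabling console logging would look something like this (assuming an IOS-style CLI; adjust for your platform):

switch# configure terminal
switch(config)# no logging console
switch(config)# end

If you still want the messages available, show logging will dump the buffered log on demand, provided buffered logging is configured on the device.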
Can you try that?
PaulD wrote:
You need to turn off console logging on the device. If you need to use the log messages in your test case, I believe there is a command to explicitly retrieve them on demand.
I'll try that, but if that were the case, wouldn't I be seeing the console logging message in the show run command's response? Right now the response is simply empty; there's nothing there. If the console logging message shows up, iTest would then interpret that message as the response to my previous command, right?
No. We extract the prompt, so it won't appear in the response. You would find it, however, in the Structure view when you look at that step, as we store the prompt in an element in the structured data. I believe you would probably see one of the normal prompts in the response body, though.
So no guarantees that this is your problem. But if you haven't disabled console logging, you'll run into this problem sooner or later.
If this is not the current problem, could you post an exported test report (*.fftz)?
PaulD wrote:
No. We extract the prompt. You would find it, however, in the Structure view when you look at that step -- as we store the prompt in an element in the structured data. I believe that you would, however, probably see one of the normal prompts in the response body.
Looking at the Structure view of that step in a failed test report, that's exactly the issue. iTest is seeing the console logging message as a prompt, extracting it from the response because it's a prompt, and moving on to the next command before it finishes gathering the response from the show run. On the runs that pass, the console logging message doesn't show up until a later step.