Forum Post: RE: WebPageParseUrl Command
Don't worry, this is added by the recorder automatically. To paraphrase the help: placing WebPageParseUrl before a page-level API call results in additional "parsing rules" being applied while the page is downloaded. It modifies the HTML parser to find additional hyperlinks (for subsequent WebPageLink functions) and URLs, which can then be used in subsequent WebPageSetActionUrl or WebPageQueryParsedUrl calls. There is a more advanced doc on "Advanced Context Management Techniques" here: documentation.microfocus.com/.../index.jsp cheers Rod
Forum Post: RE: Creating dynamic forms in Silk Performer
Check the help for the WebForm* functions, e.g. the WebFormValuePairInsert function. Silk Performer has good parsing functionality; however, it may be time-consuming to recreate application logic to dynamically build a form. It may be worth trying an AJAX project type, which drives the browser. Good luck! Rod
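As a rough sketch of what Rod is describing (the form name, field names and the exact WebFormValuePairInsert parameter list here are assumptions - check the function's help topic for the documented signature): declare the static part of the form in a dclform section and insert the dynamically built name/value pairs at run time before submitting.

// Sketch only - names and the WebFormValuePairInsert parameter list are
// assumptions; see the help for the documented signature.
dclform
  ORDER_FORM:
    "customer" := "Jack",     // static, recorded value
    "quantity" := "1";

dcltrans
  transaction TMain
  begin
    // add a field whose name/value is only known at run time
    WebFormValuePairInsert(ORDER_FORM, "dynamicField", "dynamicValue");
    // submit the page using the modified form
    WebPageSubmit("Submit", ORDER_FORM, "SubmitOrder");
  end TMain;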
Forum Post: RE: Does SilkPerformer support poplist in OAF pages?
The help has an OraFormsParseListValue function: it returns the selected value of a combo box, list box, or pop-up list box. It could be that your script replay is going wrong before your list is returned. cheers Rod NOTE - 2011/8.3 is an antique now :) It's worth upgrading to the latest 15.5 version.
Forum Post: RE: Same throughput when running performance test in EDGE and LTE network
Just to close the loop here for time travellers: the support incident identified that the profiles with the different speeds were not being called. Adding additional user types for the different profiles resolved the issue.
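For anyone who lands here later, a minimal sketch of what the additional user types look like in BDL is below (the names are made up); each user type can then be assigned its own profile - and therefore its own bandwidth/network settings - in the workload configuration.

// Sketch only - user and transaction names are illustrative. Assign a
// different profile (EDGE vs. LTE bandwidth settings) to each user type
// in the workload configuration.
dcluser
  user
    VUserEdge
  transactions
    TMain : 1;

  user
    VUserLTE
  transactions
    TMain : 1;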
Wiki Page: Why do I receive the error "HTTP: 1011 - unexpected connection close during read" during replay and how can I tackle this error?
This error has a number of possible causes, though in most instances it is the result of a failure or limitation in one of the components of the System Under Test (SUT), usually under load conditions.

Definition of the error: The "HTTP: 1011 - unexpected connection close during read" error occurs when an open connection to the server, on which the SilkPerformer replay engine expects to receive more data, is closed before that data is received and without the normal TCP handshake for closing connections gracefully. This handshake should be as follows:
i. Server: FIN
ii. Client: ACK
iii. Client: FIN
iv. Server: ACK
Thus, for example, the error will occur if the server under test sends a TCP RESET to close the connection unexpectedly. It may also occur if the connection was broken at any point between the client and the server. Cases logged to Technical Support have been traced to issues with switches, proxy servers, SSL accelerators and load balancers. Note that these errors are reported more accurately in SilkPerformer 5.x than in 4.x, so it is possible that tests that ran without error in 4.x will begin to report this error in tests using 5.x.

Effect on the end user: The effect on the end user could be anything from almost imperceptible to critical. Consider the difference between closing a connection while downloading spacer.gif versus the site homepage. Without further investigation it is impossible to say, and a careful study of the replay log and TrueLog On Error files needs to be made. This will show exactly what users experiencing such errors would see.

The HTTP 1011 error is often preceded by an SSL security warning:
WebPageUrl(Security: 267 - wrong version number, SSLRecv() failed)
WebPageUrl(HTTP: 1011 - unexpected connection close during read., recv())
This is usually the result of a component (e.g. a load balancer) sending an incorrect (corrupted) SSL version number, which leads to the "Security: 267 - wrong version number" error and the subsequent connection close.

Old KB# 19244
Forum Post: RE: Issue in recording pop up box with IE 11
Hi Ankitshah, The mentioned warning occurs if Silk Performer cannot find a visible position for the element. For "native input", which simulates user input more accurately, the DOM element (identified by the specified locator) has to be visible. Sometimes the DOM element is overlapped by another DOM element. In this case the "native" input does not work and Silk Performer performs a fallback, which is to send an input event directly to the DOM element. This fallback is not as accurate as the "native input" but works in most cases. Therefore the message is not an error but a warning. If the script works fine and all user actions are performed as expected, you can just ignore the warning. If it is a problem that the warning is displayed, you can override the severity of the message in the profile settings or via a BDL script function (see the help for details). Hope that helped. Philip
Forum Post: Connection refused on replay
Hi, I recorded a number of requests. When I try to replay them, I get the following error: WebFormPost(WinSock: 10061 - Connection refused, host="localhost:8443", attempts=3) Unlike all of the other cases of this error that Google turns up ( http://community.microfocus.com/borland/test/silk_performer_-_application_performance_testing/w/knowledge_base/23835.on-replay-why-do-i-get-the-error-webtcpipconnect-winsock-10061-connection-refused-127-0-0-1.aspx , http://community.microfocus.com/borland/test/silk_performer_-_application_performance_testing/w/knowledge_base/16576.during-replay-why-do-i-get-the-error-webtcpipconnect-winsock-10061-connection-refused-127-0-0-1-on-port-5152-specifically.aspx ), the error does not come from an unwanted request in my case. localhost:8443 is exactly the application that I want to test. It works in the browser and it works with other load testing tools -- only Silk Performer is not able to reach it. I also made sure that the system's proxy settings have been properly reset (i.e. after the recording there is no proxy, and obviously "localhost" should be reachable without a proxy). Turning the firewall off didn't help either. A while back I tested the same application deployed on GlassFish 4 and I am pretty sure that it worked. Today it is on WildFly 8.1 and it doesn't work anymore (with Silk Performer). Note that it's HTTPS. Any ideas how I could make this work? Thanks for any help! Philipp
Forum Post: RE: Connection refused on replay
Hi Philipp, Is it possible for you to share your exported Silk Performer project showing this issue? Best Regards, Neil
Wiki Page: Could not create the specified Projects folder!
When creating a new Silk Performer project (SILK PERFORMER | FILE | NEW PROJECT), the project files are automatically written to the directory specified within the Directories section of the System Settings - SILK PERFORMER | SETTINGS | WORKBENCH | DIRECTORIES (Projects). By default, the directory specified in this location will be the 'My Documents' folder of the user account which launches the Silk Performer workbench. This directory can be changed to an alternate location of the user's choosing; however, it is important to verify that the user account (which is creating the files) has access to the custom directory, i.e. Read/Write permissions. Should the user account have insufficient permissions, a "Could not create the specified Projects folder!" error will be displayed when trying to create the project within Silk Performer. To verify whether the user account has the necessary permissions, try to create/save a text file to the desired location.
Blog Post: WebSockets
With Silk Performer, you can load test a ton of different applications and technologies. Silk Performer 15.5 adds yet another capability on the web technology side: you can now test connections and web applications that use the WebSocket protocol. WebSockets are a useful technology because they allow a full-duplex (bidirectional) connection to be established between client and server, which makes them an alternative to communication models such as polling and long polling.

How does WebSocket work?
The WebSocket protocol is a TCP-based network protocol that establishes a full-duplex (bidirectional) connection between client and server. A conventional HTTP connection follows the request-response principle: each client request triggers a server response. To establish a WebSocket connection, the client sends a WebSocket upgrade request embedded in an HTTP message. Once the server acknowledges it, the open connection can be used for communication by both the server and the client at any time. Using the WebSocket protocol results in reduced network traffic and latency. It is an alternative to communication models such as polling and long polling, which were previously used to simulate full-duplex connections.

How do I record a WebSocket connection?
As you are accustomed to, you can use the Silk Performer Recorder to do all the scripting work for you. Just create a new project, select the application type Web (Async), and enter the URL of the website that uses WebSocket. Of course, you can also do the scripting manually. The following new BDL functions are available for this purpose: WebSocketConnect, WebSocketConnectSync, WebSocketClose, WebSocketSendTextMessage, WebSocketSendBinaryMessage, WebSocketReceiveMessage. To get more information on these functions, take a look at the BDL Reference: Asynchronous Communication Functions. A sketch showing a couple of WebSocket functions in the context of a BDL script follows at the end of this post.

Can you show me a real-world example?
Of course, we also have a real-world example to illustrate how recording WebSocket connections actually works: a demo stock exchange application that constantly sends price updates through an open WebSocket connection. Watch the video on the site to see the workflow in its entirety.

To learn more about web applications and how to load test them with Silk Performer, take a look at these Help topics: Web Applications Support. To get to know the other new features and enhancements of Silk Performer 15.5: What's New in Silk Performer 15.5
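As promised, here is a minimal, purely illustrative sketch using a few of the new functions. The parameter lists shown (a handle, a URL, a message string) are assumptions, not the documented signatures - see the BDL Reference: Asynchronous Communication Functions for the real ones.

// Illustrative sketch only - parameter lists are assumptions; check the
// Asynchronous Communication Functions reference for exact signatures.
dcltrans
  transaction TMain
  var
    hWebSocket : number;
  begin
    // open a WebSocket connection to the server
    WebSocketConnect(hWebSocket, "ws://demo.example.com/prices");
    // send a text message over the open, full-duplex connection
    WebSocketSendTextMessage(hWebSocket, "subscribe:BORL");
    // receive the next message pushed by the server
    WebSocketReceiveMessage(hWebSocket);
    // close the connection
    WebSocketClose(hWebSocket);
  end TMain;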
Forum Post: RE: Unable to create Java VM
Sorry, just back from holiday. The Server Configuration settings page needs a Java VM. Please check perfExpJVM.xml in your install directory.
Forum Post: Parameterization
In the sample demo Borland application, we have different driving record details, e.g. Excellent, Good, etc. The requirement is that different users need to use different driving records. In the recorded script I selected the Excellent option, so the code is BrowserRadioButtonSelect("//INPUT[@id='autoquote:Type:0']"); Please let me know how we can modify the script to ensure that we use a different driving record for different users. Thanks for your help.
Forum Post: RE: Parameterization
Hi, If you open the sample demo application using Silk Performer's Browser Application (accessed from the tools menu) and browse to that selection, you can use the "pause | break" keyboard button to show the DOM element that is used. Here you can see the different values for each option:
Excellent - I have a clean record: autoquote:type:0
Good - I have one or two minor violations: autoquote:type:1
Fair - I've caused one or more accidents: autoquote:type:2
Poor - I don't have a good driving record: autoquote:type:3
You can then use these values to customise the user input. One option is to create a CSV file containing the options and then use this to customise the value in the script. This approach is described in the following video: community.microfocus.com/.../773.customization-of-form-data-from-script-using-a-csv-file.aspx Best Regards, Neil
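If you'd rather not maintain a CSV file, another option (a rough sketch, untested) is to pick the index with a dclrand random variable and build the locator at run time; the locator below reuses the ID pattern recorded in the original script.

// Sketch: choose one of the four driving-record options at random.
dclrand
  rnType : RndUniN(0..3);   // 0 = Excellent, 1 = Good, 2 = Fair, 3 = Poor

dcltrans
  transaction TMain
  begin
    // build the XPath locator from the random index and select the option
    BrowserRadioButtonSelect("//INPUT[@id='autoquote:Type:" + string(rnType) + "']");
  end TMain;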
Forum Post: RE: Parameterization
Hi Varada, It is possible to include custom logic within your script to replicate the required behavior, but you need to understand the XPath locator strings of the radio buttons prior to implementing it. For example, using the example you provided, the pattern of your XPath locator strings may be Excellent - //INPUT[@id='autoquote:Type:0'], Good - //INPUT[@id='autoquote:Type:1'] etc. We could make use of the GetUserId function to ensure that user 1 selects a certain radio button whilst user 2 selects another. The example script below interacts with elements based upon the user ID, so, for example, the virtual user with user ID 1 will click on the Excellent radio button, whilst the virtual user with user ID 2 will click on Good.

dcltrans
  transaction TInit
  begin
  end TInit;

  transaction TMain
  var
    wnd1, i : number;
  begin
    i := GetUserId();
    BrowserStart(BROWSER_MODE_DEFAULT, 744, 274);
    BrowserSetReplayBehavior(SP_15_5);
    wnd1 := BrowserGetActiveWindow("wnd1");
    BrowserNavigate("C:\\inetpub\\wwwroot\\list.html");
    ThinkTime(5.6);
    if i = 1 then
      BrowserRadioButtonSelect("//INPUT[@name='Excellent']");
    else
      if i = 2 then
        BrowserRadioButtonSelect("//INPUT[@name='Good']");
      end;
    end;
  end TMain;

Hopefully this helps out. Regards Paul
Wiki Page: How can BDL code be used to find the download times for individual page components, namely image files?
Whilst individual page component statistics can be viewed via the "Statistics" tab when the relevant .xlg file is loaded into TrueLog Explorer, you may wish to measure this data programmatically in your BDL code. The functions WebPageStatGetRootNode and WebPageStatGetNodeData, introduced in SilkPerformer 7.1, are ideal. Please note that this is the recommended approach - it is technically incorrect to use "MeasureStart" / "MeasureStop" functions wrapped around a hardcoded WebPage call to an image file.

benchmark SilkPerformerRecorder

use "WebAPI.bdh"

dcluser
  user
    VUser
  transactions
    TInit : begin;
    TWeb  : 1;

dcltrans
  transaction TInit
  begin
    WebSetBrowser(WEB_BROWSER_MSIE6);
    WebModifyHttpHeader("Accept-Language", "en-gb");
    // Use this option to enable the extended page statistics feature.
    // This allows for browsing of the page tree (as with TrueLog Explorer)
    // and retrieving similar data to that displayed on TrueLog Explorer's
    // Statistics tab.
    WebSetOption(WEB_OPT_DETAILED_PAGE_STAT, PAGE_STAT_FLAG_AllLoadedDocs);
  end TInit;

  transaction TWeb
  var
    // declare all variables
    nNode     : number;
    nType     : number;
    sUrl      : string;
    nNodes    : number;
    fValue    : float;
    sTranName : string;
  begin
    // page to test
    WebPageUrl("http://www.yourtestsite.com/");

    // use WebPageStatGetRootNode to identify the root node
    nNode := WebPageStatGetRootNode();

    // count nodes
    nNodes := WebPageStatCountNodes();

    // write count to output file
    Writeln("Number of Nodes:= " + string(nNodes));

    // loop over every node and return statistics for each component
    nNode := 1;
    while nNode <= nNodes do
      WebPageStatGetNodeInfo(nType, sUrl, NULL, NULL, NULL, nNode);
      WebPageStatGetNodeData(STATFLAG_TimerDoc, fValue, nNode);
      Writeln("Url: " + sUrl);
      Writeln("Round-Trip Time: " + string(fValue));
      nNode := nNode + 1;
    end;
  end TWeb;

Old KB# 17463
Forum Post: Testing Web Services in Remote Agents
Hello, I used Java Explorer to create a project to test web services. When executing it on my machine it works fine, but when using remote agents I get:

JavaUser-JavaProfile_1 10.0.209.35 29 00:00:01 Process Exit JavaCreateJavaVM Native: 1002 - Java Exception, Can't start JVM, reason: could not load jvm.dll, java home = "C:\Program Files (x86)\Java\jre6", jvm-dll = "C:\Program Files (x86)\Java\jre6\bin\client\jvm.dll", last error = 126
JavaUser-JavaProfile_1 10.0.209.35 29 00:00:01 Info *** Virtual user stopped *** Severe API error (SEVERITY_PROCESS_EXIT)
JavaUser-JavaProfile_1 10.0.209.35 29 00:00:01 Info Processing Results Sending results...
JavaUser-JavaProfile_1 10.0.209.35 29 00:00:01 Info Virtual user finished Virtual user was halted

Can anyone help me with this? Thanks.
Forum Post: RE: Testing Web Services in Remote Agents
Hi there. It sounds like the path might not exist on the Agent. If you need to specify a different path on the Agent (from the one on your Controller), within Silk Performer go to Settings | System | Java | Remote. Check the option 'Use different settings for remote agents', specify the valid path from the Agent machine, then try the test again. Hope this helps. Best regards, Ciaran.
Forum Post: browser based performance testing
Hi, We have recently started to use Silk Performer 15.0 for performance testing. I have a few questions:
1. Does Silk Performer do real browser emulation for performance testing?
2. If yes, can I rely on the page load time as the load time observed by an actual user? I am seeing differences between the loadEventEnd fired by the browser and the load time reported for a single user test in Silk Performer.
3. How does Silk Performer handle browser parsing, the performance improvements introduced by browsers (like look-ahead parsing), browser parallelism (loading many scripts/CSS/images at the same time), browser rules (blocking parsing while downloading JavaScript) and other nuances of different browsers? The reason I am asking is that when we evaluated Silk Performer against JMeter, we realised that JMeter was sending requests sequentially, which was totally off compared to the page load we were observing. However, when we ran Silk Performer we saw that it did achieve some degree of parallelism in executing the page, but the page load results still do not tally with the navigation timing APIs given by the browser (navigationStart to loadEventEnd) for the page load.
4. If Silk does not use real browser emulation, then how do I rely on the performance numbers given out by Silk Performer? I am trying to understand whether the page load time Silk reports is the real page load time for the page.
Forum Post: RE: browser based performance testing
Hi,

Q: Does Silk Performer do real browser emulation for performance testing?
A: The browser-driven (AJAX) project type uses the Silk Performer browser application, which is a real browser instance - real rendering and real JavaScript execution. The normal web business transaction project type only emulates a browser.

Q: If yes, can I rely on the page load time as the load time observed by an actual user? I am seeing differences between the loadEventEnd fired by the browser and the load time reported for a single user test in Silk Performer.
A: You need to ensure you are comparing like for like. Can you provide more information on how you recorded the script (browser-driven or web business transaction) and what you are measuring against in the browser?

Q: How does Silk Performer handle browser parsing, browser performance improvements (like look-ahead parsing), browser parallelism and the other nuances of different browsers, and why do the page load results not tally with the browser's navigation timing APIs (navigationStart to loadEventEnd)?
A: Again, this really depends on the type of project you are using. If you are using browser-driven, then a real browser instance is used (it is an Internet Explorer instance adapted with some modifications for our use), so for the most part the browser application behaves in the same way as a real browser does. This is why browser-driven monitoring is so realistic. However, it is difficult for a user watching the page load to see what Silk Performer is actually measuring. Here is the process flow diagram showing what Silk Performer measures for page times in browser-driven mode: AJAX Processing Flow. Under normal circumstances, everything covered by the AJAX synchronisation is what is measured as a complete page time (split between page time and action time). It is difficult to look at any application in a browser and determine when the page is complete when AJAX is involved, because actions are taking place in the background which a user doesn't see. Silk Performer's AJAX synchronisation can tell when a page is truly complete.

Q: If Silk does not use real browser emulation, then how do I rely on the performance numbers given out by Silk Performer?
A: For browser-driven, a real browser is being used. A lot of the information above assumes you are using the browser-driven load testing project type. If this is not the case, please let me know and I can provide more specific information.
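For context, a browser-driven transaction is built from the Browser* API functions; a minimal sketch (URL and window name are placeholders) looks like the snippet below, with the page and action times coming from the navigation and interaction calls themselves.

// Minimal browser-driven sketch - URL and window name are placeholders.
dcltrans
  transaction TMain
  var
    hwnd : number;
  begin
    BrowserStart(BROWSER_MODE_DEFAULT, 800, 600);
    hwnd := BrowserGetActiveWindow("main");
    // the navigation produces its own page/action timers, including the
    // AJAX synchronisation described above
    BrowserNavigate("http://demo.borland.com/InsuranceWebExtJS/");
  end TMain;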
Blog Post: Analysing Scripting Failures in Silk Performer - Part 1
After many years working in the Silk Performer support team, probably the most common type of support question raised is how Silk Performer scripts can fail to implement the behaviour we expect. This series of articles explains some of the reasons scripts may fail and the advice we provide in our support role.

200 - Not OK

When working within HTTP/HTML web business transaction type projects and recording scripts at the HTTP protocol level, it's important to note that the success or failure of a particular API call is determined by the HTTP status code the server responds with, not by whether the business logic was implemented. With this in mind it is very possible, and common, that your running script results in an all-green TrueLog file which indicates a successful replay, but on closer inspection the database wasn't updated, or the record doesn't exist, or the item was not added to the basket.

So what's happening here? The simplest way to understand this is to understand what Silk Performer is actually doing when it runs an HTTP protocol level script. We tend to think at a system level, where we expect a certain set of inputs to have a specific output, but Silk Performer is not that way inclined - it works on a request/response basis with no overall understanding of the application under test or how it is supposed to operate.

So if every application is different, and Silk Performer has no knowledge of application business logic, workflows or expected application output, then how does it work at all? The answer lies in the HTTP specification. The HTTP specification is a set of defined and agreed-upon rules which lay out how web applications that use the HTTP protocol for communication should behave. So Silk Performer sends a request and, based on that request's data and the HTTP specification rules, it also knows what the response should be. RFC 7231 defines that any basic HTTP client should understand at least the 5 main classes of HTTP response codes: 1xx, 2xx, 3xx, 4xx, 5xx. In a nutshell, your script can generate a completely successful replay from a server status code perspective whilst being completely unsuccessful from your business logic perspective.

Take the example of an image file. Web pages are made up of many image files, and when Silk Performer requests a web page from an application it parses the source code of the response body and also performs sub-requests for all additional embedded content (assuming the use of contextful functions). It might look something like this:

Request:
GET /take2.gif HTTP/1.1
Referer: http://demo.borland.com/
Host: demo.borland.com
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.0)

Response:
HTTP/1.1 200 OK
Content-Type: image/gif
Accept-Ranges: bytes
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Content-Length: 626

All that's really happening here is that Silk Performer sends a request header for a file called take2.gif from a particular host, and the host returns a response header with the 200 OK status code, meaning the request was successful (the response body, the image itself, would follow). However, Silk Performer is basing its own green successful status in TrueLog solely on the HTTP 200 status code returned by the server, not on the file's contents. It's safe to assume that if an image is requested from the server and the server returns a 200 status code then the image has been returned successfully; however, this is not the case when dealing with entire pages.
Let's look at another request:

Request:
GET /InsuranceWebExtJS/ HTTP/1.1
Host: demo.borland.com
Connection: Keep-Alive
Accept: */*
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.0)

Response:
HTTP/1.1 200 OK
Content-Type: text/html;charset=UTF-8
Content-Encoding: gzip
Server: Microsoft-IIS/7.5
X-Powered-By: JSF/1.2
Content-Length: 2629

On this occasion the request is for a page, not a single image. Again the page has been returned, and this is verified by a 200 status code, but without checking the content of the page there is no way of knowing whether this is the correct page - whether it contains the content you expect - whether your business logic has been implemented.

It is important to note at this stage that Silk Performer's page-based browser level API does contain some built-in verifications based on web context management. This means it scripts context-full functions (WebPageLink, WebPageSubmitBin etc.) which, by design, use the results of previous function calls. This is best illustrated by the following example from the Silk Performer help files.

Example HTML code:
<a href="http://www4.company.com/store/5423/account">Edit Account</a>

The above link contains load balancing information (www4) and a session id (5423) in the target URL. Assume the user clicks this link. This can be modeled in BDL using the WebPageUrl function:

WebPageUrl("http://www4.company.com/store/5423/account");

The problem with this is that the dynamic components of the URL are hard-coded into the script. Alternatively, the WebPageLink function can be used:

WebPageLink("Edit Account");

This solution is better because the Silk Performer replay engine will, using its HTML parser, use the actual URL that is associated with the link name Edit Account during replay. While it is still possible that a link name can change dynamically, it is far less likely than having a URL change.

So in essence, context-full functions do give Silk Performer some knowledge of what a response should contain, which empowers the replay engine to look for certain things. For example, if the page which contains this Edit Account link is returned from the server, a 200 OK is received. However, if the Edit Account link is not on the page, the server will still respond 200 OK, but the Silk Performer replay engine will throw an error to indicate that an expected link was not found.

Now that we understand how a Try Script run can show as successful when the script hasn't performed the actions you expected, you might wonder what can be done about that beyond the built-in verifications of context management or manually verifying the response of every request. The answer is to add verifications to your script to check for key items that you know will be on the page if the business logic has been successfully implemented.

In the next section we will look at Understanding Session, Missing Header Information and Post Data.
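To make that advice concrete, here is a minimal sketch. Verification functions, like parsing functions, are placed before the page-level call they apply to; the WebVerifyHtml parameters shown (search string and expected occurrence count) are a simplified assumption, so check the BDL reference for the documented signature and severity options.

// Sketch only - the WebVerifyHtml parameter list below is a simplified
// assumption; consult the BDL reference for the documented signature.
dcltrans
  transaction TMain
  begin
    // verify that the next downloaded page really contains content that
    // proves the business logic ran, not just that the server sent 200 OK
    WebVerifyHtml("Account details", 1);
    WebPageLink("Edit Account", "EditAccount");
  end TMain;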