
Monday, June 15, 2015

Performance Testing & Load Runner

Performance Testing:

Performance testing is the process of determining the responsiveness, effectiveness, scalability and stability of software or a device, i.e. how the components of a system perform under a given workload and state.

It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

It includes load testing, stress testing, capacity testing, volume testing, endurance testing, spike testing, scalability testing, reliability testing, etc.
This type of testing does not produce a simple pass/fail result; it is performed to establish benchmarks and standards for the application against measures such as concurrency/throughput, server response time, latency and render response time, which characterize its responsiveness, speed, scalability and stability.

LoadRunner has four key components

1. Virtual User Generator (VUGEN)
VuGen is used for generating and editing Vuser scripts (a minimal script sketch appears at the end of this section).
It has four parts
1)    Record
2)    Run
3)    Debug
4)    Design

2. Controller
The Controller is where we apply the load; it organizes, drives, manages and monitors the load test.
3. Load Generator
The Load Generator is the machine that generates the load by running virtual users (Vusers).
4. Analysis
Analysis collects logs from the various load generators and presents reports for visualizing the run-result and monitoring data.
LoadRunner reports error details such as:
1) Syntax errors: missing or mismatched [ , ; : " ) etc.
2) Runtime errors: e.g. spelling errors detected during replay.
3) Compilation errors.
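
To make the VuGen part concrete, here is a minimal sketch of what a recorded and lightly edited Vuser script might look like for a C-based web protocol. The transaction name and URL are illustrative placeholders, not from the original post.

    /* Action.c - minimal VuGen sketch (C-based web protocol). */
    Action()
    {
        lr_start_transaction("load_home_page");      /* start a measured step */

        web_url("home",                              /* step name as recorded */
            "URL=http://example.com/",               /* hypothetical application URL */
            "Resource=0",
            "Mode=HTML",
            LAST);

        lr_end_transaction("load_home_page", LR_AUTO);  /* pass/fail decided automatically */
        return 0;
    }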

Advanced Components

·         Parameterization
Replacing a recorded value with a parameter; the values you substitute must exist in the application's data (e.g. its database) so the requests remain valid. A short sketch follows this item.
          Parameterization helps in:
1.    Reducing script size
2.    Avoiding cache effects (each Vuser sends different data)
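
A minimal parameterization sketch, assuming a hypothetical login form and parameters named {username} and {password} defined in VuGen's parameter list:

    /* The recorded literal values have been replaced with parameters. */
    web_submit_data("login",
        "Action=http://example.com/login",           /* hypothetical URL */
        "Method=POST",
        ITEMDATA,
        "Name=user", "Value={username}", ENDITEM,    /* was a hard-coded user name when recorded */
        "Name=pass", "Value={password}", ENDITEM,
        LAST);

    lr_output_message("Logged in as %s", lr_eval_string("{username}"));  /* resolve the parameter at run time */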

·         Correlation
Correlation captures values generated dynamically by the server (such as session IDs) and feeds them into subsequent requests, so the script keeps working across sessions; see the sketch just below.
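
A minimal correlation sketch: the parameter name, URLs and boundaries below are illustrative assumptions, but the pattern (register the capture before the step that returns the value, then reuse it) is the standard one.

    web_reg_save_param("SessionId",                  /* must be registered BEFORE the response arrives */
        "LB=sessionid=",                             /* left boundary of the dynamic value */
        "RB=\"",                                     /* right boundary */
        "Search=Body",
        LAST);

    web_url("login_page",
        "URL=http://example.com/login",
        "Mode=HTML",
        LAST);

    web_url("account",                               /* reuse the captured value in the next request */
        "URL=http://example.com/account?sessionid={SessionId}",
        "Mode=HTML",
        LAST);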

·         Load Balancing
Distributing the virtual users across the load generators and controlling how the load is increased or decreased during the test.

·         Think time
Think time is the time that a real user waits or pauses between actions (see the sketch below).
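
A small think-time sketch (the URLs are placeholders); note that the run-time settings decide whether recorded think time is replayed as is, ignored, or randomized.

    web_url("search", "URL=http://example.com/search?q=phones", "Mode=HTML", LAST);
    lr_think_time(8);    /* simulate a user reading the results for about 8 seconds */
    web_url("item", "URL=http://example.com/item/42", "Mode=HTML", LAST);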

·         Rendezvous Point

A rendezvous point is a "meeting point". This option is used to create peak load (as opposed to off-peak load): virtual users are instructed or configured to wait during test execution until the required number of Vusers arrive at that point, and then they all proceed together, as in the sketch below.
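
A minimal rendezvous sketch, assuming a hypothetical order-submission transaction; every Vuser that reaches the call waits (according to the Controller's rendezvous policy) until enough Vusers have arrived, then they all fire the next step at the same time.

    lr_rendezvous("submit_order_peak");              /* wait here until the configured number of Vusers arrive */
    lr_start_transaction("submit_order");
    web_url("submit", "URL=http://example.com/order/submit", "Mode=HTML", LAST);
    lr_end_transaction("submit_order", LR_AUTO);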

Monday, June 14, 2010

to all my friends ..









There is no wealth greater than the memory of a friend;
staying together is not a requirement of friendship.
Distance is what keeps memories alive,
otherwise memories would have no value :)

Wednesday, June 9, 2010

Software QA

Software QA

About ‘Software Quality Assurance'
Software QA involves the entire software development PROCESS - it is oriented to 'prevention'.

QA is about monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

 

About ‘Software Testing'
Software Testing involves operating a system or application under controlled conditions and evaluating the results.

TESTING - it is oriented to 'detection'.

Controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should.

QA and testing may be the combined responsibility of one group or individual. Organizations vary considerably in how they assign responsibility, often based on the project. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

 

Automation Testing vs Manual Testing


The general rule of thumb has always been to use common sense. If you're only going to run the test one or two times, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying "use common sense" when you need to come up with a deterministic set of guidelines on how and when to automate?

Pros of Automation:

If one has to run a set of tests repeatedly, automation is a huge win.
It gives one the ability to run automation against code that frequently changes to catch regressions in a timely manner
It gives one the ability to run automation in mainstream scenarios to catch regressions in a timely manner.
Aids in testing a large test matrix (different languages on different OS platforms).
Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.

Cons of Automation:
It costs more to automate: writing the test cases and writing or configuring the automation framework you are using costs more initially than running the test manually.
You can't automate visual verification; for example, if you can't tell the font color via code or the automation tool, it is a manual test.

Pros of Manual:

If the test case only runs twice per coding milestone, it most likely should be a manual test; it costs less than automating it.
It allows the tester to perform more ad hoc (exploratory/random) testing. In my experience, more bugs are found via ad hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of Manual:

Running tests manually can be very time consuming.
Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome. 



Tuesday, June 8, 2010

We Test For Quality But Who Cares?

Nicely written by Eric Jacobson

The most challenging presentation I saw at Stareast was by Google Senior Test Engineer, Goranka Bjedov. She makes the case that the world is heading toward developing software without testing for quality and that this practice may not be a bad thing. Scary but true!
First, Goranka pigeon-holed testing into two categories: productivity and quality. Her definitions (per my notes) are as follows:
Productivity Testing – Making sure programmers don’t break code (e.g., unit tests). Testing things consumed by machines. Anything consumed by machines is easy to automate. These tests are cheap, fast, well-defined. The problems failed tests expose do not require deep analysis.
Quality Testing – Testing things consumed by humans. Anything consumed by humans is not easy to automate and therefore difficult to test. Expensive. Tests become more flaky as the system becomes more complex. The right tests are not clear. Failed tests require deep analysis. These tests take longer.
With the promise of quicker software delivery, productivity testing has become more important than quality testing. Wake up, the world is already adapting in several ways.
For example, at Google, they know hardware and infrastructure will always fail. Instead of wasting time with exhaustive tests, their solution is to manage risk (e.g., build in seamless failovers and backups) and shield the user from the failures.
Goranka also countered that in cases where poor quality is seemingly not an option (e.g., medical software), users have already adapted by not relying on it. She claims users in hospitals, for example, know not to trust someone's life to a piece of software. Instead, they monitor the patient as a human and understand that software is fallible.
These are excellent points, IMO, and I would have been satisfied contemplating a future where my job no longer existed...but hold on!
Goranka asked us to do a little exercise. She asked us to determine the rule used to generate these three sequences by writing five additional sequences of our own:
-25, -5, 15, 35, …
2, 4, 6, 8, …
0, 3, 6, 9, …
I don't want to give away her rule, but you can still try it on your own.
After surveying the audience, she pointed out that developers tend to write confirmatory tests more than testers, who tend to write more negative tests. Thus, perhaps testers do play an important role. She also questioned how much productivity tests actually tell us about the system as a whole. Her answer? …they tell us nothing.
In the end, she left us with this thought…
If you think (non-programmer) testers are important, you better start doing something about it.

Nicely written by Eric Jacobson

Saturday, June 27, 2009

QA - Quality Assurance and Software Testing




What is Software Quality Assurance?
Quality Assurance makes sure the project will be completed based on the previously agreed specifications, standards and required functionality, without defects and possible problems. It monitors and tries to improve the development process from the beginning of the project to ensure this. It is oriented to "prevention".
When should QA testing start in a project? Why?
QA is involved in the project from the beginning. This helps the teams communicate and understand the problems and concerns, also gives time to set up the testing environment and configuration. On the other hand, actual testing starts after the test plans are written, reviewed and approved based on the design documentation.
What is Software Testing?
Software testing is oriented to "detection". It is examining a system or an application under controlled conditions. It intentionally tries to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should.
What is Software Quality?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
What is Verification and Validation?
Verification is a preventive mechanism to detect possible failures before testing begins. It involves reviews, meetings, inspections, and evaluating documents, plans, code, specifications, etc. Validation occurs after verification and is the actual testing to find defects against the functionality or the specifications.
What is Test Plan?
Test Plan is a document that describes the objectives, scope, approach, and focus of a software testing effort.
What is Test Case?
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
What is Good Code?
Good code is code that works according to the requirements, is bug-free, readable, expandable in the future and easily maintainable.
What is Good Design?
In a good design, the overall structure is clear, understandable, easily modifiable and maintainable; it works correctly when implemented, and its functionality can be traced back to customer and end-user requirements.
Who is Good Test Engineer?
A good test engineer has the ability to think the unthinkable, has a 'test to break' attitude, a strong desire for quality, and attention to detail.
What is Walkthrough?
A walkthrough is a quick, informal meeting held for evaluation purposes.
What is Software Life Cycle?
The Software Life Cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
What is Inspection?
The purpose of an inspection is to find defects and problems, mostly in documents such as test plans, specifications, test cases, code, etc. It helps to find problems and report them, but not to fix them. It is one of the most cost-effective methods of ensuring software quality. Many people can join an inspection, but normally a moderator, a reader and a note taker are mandatory.
What are the benefits of Automated Testing?
Automated testing is very valuable for long-term and ongoing projects. You can automate some or all of the tests that need to be run repeatedly or that are difficult to test manually. It saves time and effort, and also makes it possible to run tests outside working hours and overnight. Automated tests can be reused by different people many times in the future. In this way you also standardize the testing process and can depend on the results.
What do you imagine are the main problems of working in a geographically distributed team?
The main problem is communication. Getting to know the team members and sharing as much information as possible whenever needed is very valuable for resolving problems and concerns. Increasing communication channels as much as possible and setting up regular meetings also help to reduce miscommunication problems.
What are the common problems in Software Development Process?
Poor requirements, unrealistic schedules, inadequate testing, miscommunication, and additional requirement changes after development begins.

Friday, June 26, 2009

Link roundup for Visual Studio Team System 2010 Test Edition



I've been playing around with Microsoft Visual Studio 2010 Team System (Beta 1) the last few weeks and I have to say that I'm pretty excited about what Microsoft is doing to help tie development, testing, and environments together. The thing that stands out the most to me is the "Test and Lab Manager". This tool allows me to write manual tests, automate tests, and then configure, control and run those tests in a specified physical or virtual environment. Although beta 1 is pretty rough around the edges, what I'm seeing is really exciting. Through my playing around and research I've gathered a few links full of information, screenshots, demos, videos, and official documentation. Peruse and enjoy, but before you get started, go get a rag so that you can clean the drool off of the side of your mouth when you're done.

MSDN documentation for "Testing the Application" in VSTS 2010:
http://msdn.microsoft.com/en-us/library/ms182409(VS.100).aspx

Video: Functional UI Testing with VSTS 2010
http://channel9.msdn.com/shows/10-4/10-4-Episode-18-Functional-UI-Testing/

How to add a VSTS 2010 coded UI test to a build:
http://blogs.msdn.com/mathew_aniyan/archive/2009/05/26/coded-ui-test-in-a-team-build.aspx

Creating and running a VSTS 2010 coded UI test through a Lab Manager project:
http://blogs.msdn.com/jasonz/archive/2009/05/26/vs2010-tutorial-testing-tutorial-step-2.aspx
http://blogs.msdn.com/mathew_aniyan/archive/2009/05/26/coded-ui-test-from-microsoft-test-lab-manager.aspx

Explanation of the various Test tool names and products:
http://blogs.msdn.com/jasonz/archive/2009/05/12/announcing-microsoft-test-and-lab-manager.aspx

VSTS related blogs:
http://blogs.msdn.com/vstsqualitytools/
http://blogs.msdn.com/amit_chatterjee/
http://blogs.msdn.com/mathew_aniyan/

