Tuesday, February 22, 2011

Regression Testing (Black Box Software Testing)


Regression testing is a style of testing that focuses on retesting after changes are made. 

In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.
Regression testing attempts to mitigate two risks:
  • A change that was intended to fix a bug failed.
  • Some change had a side effect, unfixing an old bug or introducing a new bug.
In addition, proponents of traditional regression testing argue that retesting is a measurement or control process, a means of assuring that the program is as stable as it was previously.
Regression testing approaches differ in their focus. Common examples include:
  • Bug regression: We retest a specific bug that has been allegedly fixed.
  • Old fix regression testing: We retest several old bugs that were fixed, to see if they are back. (This is the classical notion of regression: the program has regressed to a bad state.)
  • General functional regression: We retest the product broadly, including areas that worked before, to see whether more recent changes have destabilized working code. (This is the typical scope of automated regression testing.)
  • Conversion or port testing: The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than the modified old code.)
  • Configuration testing: The program is run with a new device or on a new version of the operating system or in conjunction with a new application. This is like port testing except that the underlying code hasn't been changed--only the external components that the software under test must interact with.
  • Localization testing: The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.
  • Smoke testing (also known as build verification testing): A relatively small suite of tests is used to qualify a new build. Normally, the tester is asking whether any components are so obviously or badly broken that the build is not worth testing, whether some components are broken in obvious ways that suggest a corrupt build, or whether some critical fixes that are the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build stops), not just a new set of bug reports.
Any test can be reused, and so any test can become a regression test. Regression testing naturally combines with all other test techniques. The essence of regression testing is exposure of problems that shouldn't be there, either because they were exterminated before or they weren't in the product the last time(s) it was tested.
The following examples illustrate the use of regression tests:
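A bug regression check, for example, might be scripted along these lines (a minimal sketch only; the program name, bug number and file names are hypothetical):

#!/bin/sh
# Regression check for a previously fixed bug (hypothetical bug #1234):
# re-run the input that used to trigger the failure and compare the
# output against a saved, known-good result.
./myprog --input bug1234-input.txt > bug1234-actual.out
if diff -q bug1234-expected.out bug1234-actual.out > /dev/null; then
    echo "PASS: bug 1234 is still fixed"
else
    echo "FAIL: bug 1234 has regressed"
    exit 1
fi

A suite of such scripts, re-run against every new build, is the simplest form of old fix regression testing.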

Dpkg Primer (Using Basic Commands of dpkg)


Debian is one of the earliest Linux distributions around. It caught the public's fancy because of the ease of installing and uninstalling applications on it. When many other Linux distributions were bogged down in dependency hell, Debian users were shielded from these problems owing to Debian's superior package handling capabilities using apt-get.


All Linux distributions which claim their roots in the Debian distribution use this versatile package manager. For the uninitiated, Debian uses the deb package format for bundling together the files belonging to an application. You can think of it as something like a setup installer (e.g. InstallShield) on Windows.


Here I will explain how to go about using this package handling utility to get the results that you desire.


The first step needed to use apt-get to your advantage is including the necessary repositories. Repositories are merely collections of software stored in a public location on the internet. By including the web addresses of these repositories, you are directing apt-get to search these locations for the desired software. You use the /etc/apt/sources.list file to list the addresses of the repositories. It takes the following format:
deb [web address] [distribution name] [components: main contrib non-free]
For example, in Ubuntu, a Debian-based distribution, it could be something like this:
deb http://in.archive.ubuntu.com/ubuntu breezy main restricted
You can add any repository you like. apt-get.org contains an excellent collection of repositories to suit all tastes.
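As a rough sketch (the mirror URL and release name below are only placeholders; substitute the ones appropriate to your distribution), a sources.list with all three Debian components enabled might contain lines like:

deb http://ftp.debian.org/debian stable main contrib non-free
deb http://security.debian.org/ stable/updates main contrib non-free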

Once you have set up the repositories, the next step is to sync the local package database with the databases on the repositories. This caches a copy of the list of all the remotely available packages on your machine. It is achieved by running the following command:
# apt-get update
An advantage of this is that you can now search for a particular program, to see whether it is available for your version of the distribution, using the apt-cache command. And you don't need a net connection to do this. For example,
# apt-cache search baseutils
... will tell me if the package baseutils is available in the repository or not by searching the locally cached copy of the database.
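If you want more detail than a yes/no answer, apt-cache can also display the full record of a package (description, version number, dependency list) from the same local cache:

# apt-cache show baseutils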

Once you have figured out that the package (in our case baseutils) is available, installing it is as simple as running the following command:
# apt-get install baseutils
The real power of apt-get is realised now. If the baseutils package depends on the availability of a particular version of a library, say "xyz1.5.6.so", then apt-get will download that library (or the package containing it) from the net and install it before installing the baseutils package. This is known as automatic dependency resolution.
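If you want to preview what apt-get is going to pull in before committing, the -s (simulate) flag prints the planned actions without actually installing anything:

# apt-get -s install baseutils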

And removing a package is as simple as running the command:
# apt-get remove baseutils
Get statistics about the packages available in the repositories by running the command:
# apt-cache stats
Total package names : 22502 (900k)
Normal packages: 17632
Pure virtual packages: 281
Single virtual packages: 1048
Mixed virtual packages: 172
Missing: 3369
...
To upgrade all the software on your system to the latest versions, do the following:
# apt-get upgrade
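In practice, update and upgrade usually go together: refresh the package lists first, then upgrade, so that apt-get works from current information:

# apt-get update && apt-get upgrade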
And finally the king of them all - upgrading the whole distribution to a new version can be done with the command:
# apt-get dist-upgrade
Saving valuable hard disk space
Each time you install an application using apt-get, the package is actually cached in a location on your hard disk, usually /var/cache/apt/archives/. Over a period of time, the cached packages will eat up your valuable hard disk space. You can clear the cache and release hard disk space by using the following command:
# apt-get clean
You could also use autoclean, wherein only those packages in the cache that are found useless (no longer downloadable) or are only partially complete are deleted.
# apt-get autoclean
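If you are curious how much space the cache is occupying before you clear it, du will tell you:

# du -sh /var/cache/apt/archives/
# apt-get clean
# du -sh /var/cache/apt/archives/

Running du again after the clean shows the space you got back.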
dpkg - The low-level package management utility
As I said earlier, Debian-based distributions use the deb package format. Usually, normal users like you and me are shielded from handling individual deb packages. But if you find yourself in a situation where you have to install a deb package directly, you use the dpkg utility.
Let's assume I have a deb package called gedit-2.12.1.deb and I want to install it on my machine. I do it using the following command:
# dpkg -i gedit-2.12.1.deb
To remove an installed package, run the command:
# dpkg -r gedit
The main thing to note above is that I have used only the name of the program, and not the version number, while removing the software.
You may also use the --purge (-P) flag for removing software.
# dpkg -P gedit
This will remove gedit along with all its configuration files, whereas -r (--remove) does not delete the configuration files.

Now let's say I do not want to actually install a package but want to see the contents of a deb package. This can be achieved using the -c flag:
# dpkg -c gedit-2.12.1.deb
To get more information about a package, like the author's name, the year in which it was compiled and a short description of its use, you use the -I flag:
# dpkg -I gedit-2.12.1.deb
You can even use wild cards to list the packages on your machine. For example, to see all the gcc packages on your machine, do the following:
# dpkg -l 'gcc*'

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
|/ Err?=(none)/Hold/Reinst-required/X=both-problems
||/ Name            Version        Description
+++-===============-==============-========================
ii  gcc             4.0.1-3        The GNU C compiler
ii  gcc-3.3-base    3.3.6-8ubuntu1 The GNU Compiler Collection
un  gcc-3.5         <none>         (no description available)
un  gcc-3.5-base    <none>         (no description available)
un  gcc-3.5-doc     <none>         (no description available)
ii  gcc-4.0         4.0.1-4ubuntu9 The GNU C compiler
...
In the above listing, the first 'i' denotes the desired state, which is install. The second 'i' denotes the actual state, i.e. gcc is installed. The third column flags error problems, if any. The fourth, fifth and sixth columns give the name, version and description of the packages respectively. gcc-3.5 is not installed on my machine, so its status is given as 'un', which means unknown/not-installed.

To check whether an individual package is installed, you use the status (-s) flag:
# dpkg -s gedit
Two days back, I installed beagle (a real-time search tool based on Mono) on my machine. But I didn't have a clue about the location of the files, or what files were installed along with beagle. That was when I used the -L option to get a list of all the files installed by the beagle package.
# dpkg -L beagle
Even better, you can combine the above command with grep to get a listing of all the html documentation of beagle.
# dpkg -L beagle | grep html$
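The same trick works for anything else you care about; for instance, to pick out just the executables the package ships:

# dpkg -L beagle | grep /bin/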
These are just a small sample of the options you can use with the dpkg utility. To know more about this tool, check its man page.
If you are allergic to excessive command-line activity, then you may also use dselect, which is a curses-based, menu-driven front-end to the low-level dpkg utility.
dpkg -S | --search filename-search-pattern ...
Search for a filename from installed packages. All standard shell wildchars can be used in the pattern.
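For example, to find out which package owns a particular file already on your system:

# dpkg -S /bin/ls

On a Debian or Ubuntu system this reports something like "coreutils: /bin/ls".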


dpkg -p|--print-avail package
Display details about package, as found in /var/lib/dpkg/available.


dpkg --update-avail | --merge-avail Packages-file
Update dpkg's and dselect's idea of which packages are available. With action --merge-avail, old information is combined with information from Packages-file. With action --update-avail, old information is replaced with the information in the Packages-file. The Packages-file distributed with Debian GNU/Linux is simply named Packages. dpkg keeps its record of available packages in /var/lib/dpkg/available.


dpkg -A | --record-avail package_file ...
Update dpkg and dselect's idea of which packages are available with information from the package package_file. If --recursive or -R option is specified, package_file must refer to a directory instead.


dpkg -l | --list package-name-pattern ...
List packages matching given pattern. If no package-name-pattern is given, list all packages in /var/lib/dpkg/available. Normal shell wildchars are allowed in package-name-pattern. (You will probably have to quote package-name-pattern to prevent the shell from performing filename expansion. For example, dpkg -l 'libc5*' will list all the package names starting with "libc5".)


dpkg -s | --status package-name ...
Report status of specified package. This just displays the entry in the installed package status database.



Get a list of everything you've installed?
dpkg -l '*'


List each available package whose name matches thunderbird.
# apt-cache pkgnames | grep thunderbird


"dpkg --force-help" is your friend.

Monday, February 14, 2011

Get to Know the /etc/init.d Directory -- /etc/init.d/command {start|stop|restart|force-reload}


If you use Linux you most likely have heard of the init.d directory. But what exactly does this directory do? It ultimately does one thing but it does that one thing for your entire system, so init.d is very important. The init.d directory contains a number of start/stop scripts for various services on your system. Everything from acpid to x11-common is controlled from this directory. Of course it’s not exactly that simple.
If you look at the /etc directory you will find directories of the form rc#.d (where # is a number reflecting a specific initialization level, from 0 to 6). Within each of these directories is a number of other scripts that control processes. These scripts will begin with either a “K” or an “S”. All “K” scripts are run before “S” scripts, and where a script is located determines when it initiates. Between the directories, the system services work together like a well-oiled machine. But there are times when you need to start or stop a process cleanly and without using the kill or killall commands. That is where the /etc/init.d directory comes in handy.
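To get a feel for this, just list one of the runlevel directories. On a Debian-style system it looks something like the following (the exact entries depend on what you have installed; these names are only an illustration):

# ls /etc/rc2.d
README  S10sysklogd  S20makedev  S20ssh  S20samba  S89cron  S91apache2  S99rc.local  S99rmnologin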
Now if you are using a distribution like Fedora you might find this directory in /etc/rc.d/init.d. Regardless of location, it serves the same purpose.
In order to control any of the scripts in init.d manually you have to have root (or sudo) access. Each script will be run as a command and the structure of the command will look like:
/etc/init.d/command OPTION
Where command is the actual command to run and OPTION can be one of the following:
start
stop
reload
restart
force-reload
Most often you will use either start, stop, or restart. So if you want to stop your network you can issue the command:
/etc/init.d/networking stop
Or if you make a change to your network and need to restart it, you could do so with the following command:
/etc/init.d/networking restart
Some of the more common init scripts in this directory are:
networking
samba
apache2
ftpd
sshd
dovecot
mysql
Of course there may be more often-used scripts in your directory; it depends upon what you have installed. The above list was taken from an Ubuntu Server 8.10 installation, so a standard desktop installation would have a few fewer networking-type scripts.
But what about /etc/rc.local?
There is a third option that I used to use quite a bit: the /etc/rc.local script. This file runs after all other init-level scripts have run, so it's a safe place to put various commands that you want issued upon startup. Many times I will place mounting instructions for things like NFS in this script. This is also a good place to put “troubleshooting” scripts. For instance, I once had a machine on which, for some reason, Samba did not want to start, even after I checked to make sure the Samba daemon was set up to initialize at boot. So instead of spending all of my time up front on this, I simply placed the line:
/etc/init.d/samba start
in the /etc/rc.local script and Samba worked like a charm. Eventually I would come back and troubleshoot the issue.
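Putting both uses together, a minimal rc.local might end up looking something like this (the NFS server name and paths are only placeholders):

#!/bin/sh -e
# /etc/rc.local - runs after all the other init scripts
# mount a remote NFS share (server and paths are examples only)
mount -t nfs fileserver:/export/data /mnt/data
# work around a service that refuses to start at boot
/etc/init.d/samba start
exit 0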
Linux is flexible. Linux is so flexible that there are, almost inevitably, numerous ways to solve a single problem. Starting a system service is one such case. With the help of the /etc/init.d system (as well as /etc/rc.local) you can pretty much rest assured your service will start.
chkconfig -- chkconfig allows you to check the runlevels at which services start and gives you the ability to change them. Running ‘chkconfig’ will set up the scripts in the needed “rcX.d” directory.
/sbin/service command {start|stop|restart|...}
This command is the equivalent of typing “/etc/init.d/command {start|stop|restart|...}”.
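For example, on a system that provides the service wrapper, the following two commands do the same thing:

# /sbin/service samba restart
# /etc/init.d/samba restart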
In order to check whether a particular process is running or not, type ps -fu root | grep daemonname. For example:
ps -fu root | grep cupsd
sudo /etc/init.d/cups stop
If the cups daemon is not running, you'll get an appropriate message from this command.