How are Exceptions Handled in Selenium?

In Selenium with Java, exceptions such as NoSuchElementException are handled with the standard Java mechanisms: try/catch blocks, the finally clause, and the throw and throws keywords. The difference between throw and throws is that throw raises an exception explicitly inside a method body, while throws declares in a method's signature which exceptions the method may propagate to its caller.
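A runnable sketch of the try/catch/finally pattern, using java.util.NoSuchElementException and a toy findElement method as stand-ins for Selenium's WebDriver API:

```java
import java.util.NoSuchElementException;

public class ExceptionHandlingDemo {
    // Toy stand-in for driver.findElement(): throws when the locator is unknown.
    static String findElement(String locator) {
        if (!locator.equals("id=login")) {
            throw new NoSuchElementException("No element found for " + locator);
        }
        return "loginButton";
    }

    public static void main(String[] args) {
        try {
            findElement("id=logout");                   // this locator does not exist
        } catch (NoSuchElementException e) {
            System.out.println("Caught: " + e.getMessage());
        } finally {
            System.out.println("finally always runs");  // e.g. where driver.quit() would go
        }
    }
}
```

In a real Selenium test, the catch block would typically log the failure or take a screenshot, and the finally block would clean up the browser session.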

What is the difference between a Class and an Object?

A class is a concept used in object-oriented programming languages such as C++, PHP, and Java. Apart from holding data, a class can also hold functions.

An object is an instance of a class. A class can hold data and/or functions.

The term ‘class’ refers to the actual written piece of code that defines the behavior of any given class. A class is a static piece of code consisting of attributes that don’t change during the execution of a program, such as the method definitions within the class.

The term ‘object’, however, refers to an actual instance of a class. Every object must belong to a class. Objects are created and eventually destroyed, so they live in the program only for a limited time. While objects are ‘living’, their properties may also change significantly.

Objects have a lifespan, but classes do not.

A class is a general concept (like Animal); an object is a very specific embodiment of that class, with a limited lifespan (like a lion, a cat, or a zebra). Another way of thinking about the difference is that a class provides a template for something more specific that the programmer has to define, which he or she will do when creating an object of that class.

A class is a template. When a class is declared, no memory is allocated, but when an object of that class is created, memory is allocated.

The keyword “class” is used to declare a class.

An object depends upon a class, as it can be created only if the class has already been declared. This is why an object is referred to as an instance of a class; objects and classes are closely related.

A class is declared using the class keyword,
e.g. class Student {}

An object is created mainly through the new keyword, e.g.
Student s1 = new Student();

A class is declared once and does not allocate memory when it is created.
An object is created as many times as required, and memory is allocated each time an object is created.

An object is any entity that can be used: a variable, a value, a data structure, or a function/method. In the real world, objects are things like your car, a bus, or a table.

A class is a blueprint or template from which objects are created; it is a group of similar objects.
A class is a logical entity.

Saying a class is a blueprint means you can create different objects based on one class, varying in their properties. For example, if
Car is a class, then Mercedes, BMW, or Audi can be considered objects, because each is essentially a car but differs in size, shape, color, and features.

An object is an instance of a class. An object is a real-world entity such as a pen, laptop, mobile, bed, keyboard, mouse, or chair. An object is a physical entity.

Encapsulation: methods are used to access the data of an object, and all interaction is done through the object’s methods. This is known as data encapsulation. Objects are also used for data or code hiding.

A class is a type; an object is a variable of that type.

A class in Java contains both state and behavior. State is represented by fields in the class, e.g. numberOfGears, whether the car is automatic or manual, or whether the car is running or stopped. Behavior is controlled by functions, known as methods in Java: e.g. start() changes the state of the car from stopped to running, and stop() does the opposite.


A class is nothing but a blueprint or a template for creating different objects which defines its properties and behaviors. Java class objects exhibit the properties and behaviors defined by its class. A class can contain fields and methods to describe the behavior of an object.

Class − A class can be defined as a template/blueprint that describes the behavior/state that objects of its type support.

Object − Objects have states and behaviors. Example: A dog has states - color, name, breed as well as behaviors – wagging the tail, barking, eating. An object is an instance of a class.

A class is a definition, while an object is an instance of the class. A class is a blueprint, while objects are actual entities existing in the real world. For example, we might have a class Car with attributes and methods like speed, brakes, and type of car.

In object-oriented programming, a class is a template definition of the methods and variables in a particular kind of object. Thus, an object is a specific instance of a class; it contains real values instead of variables. The class is one of the defining ideas of object-oriented programming.

In object-oriented programming, a class is an extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods).
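The points above can be sketched with the Car example from the text (the fields and method names below follow the numberOfGears/start()/stop() example; the gear counts are invented):

```java
// One class (template), many objects (instances), each with its own state.
public class Car {
    // State: fields that each object carries its own copy of.
    int numberOfGears;
    boolean running;

    Car(int numberOfGears) {
        this.numberOfGears = numberOfGears;
        this.running = false;
    }

    // Behavior: methods change the object's state.
    void start() { running = true; }
    void stop()  { running = false; }

    public static void main(String[] args) {
        // Two distinct objects created from the same class.
        Car mercedes = new Car(7);
        Car audi = new Car(6);

        mercedes.start();
        System.out.println(mercedes.running); // true
        System.out.println(audi.running);     // false: each object has its own state
    }
}
```

No memory is allocated for the instance fields when the class is compiled; each `new Car(...)` allocates a fresh object with its own copy of the state.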

How to do mobile test automation?

Top-notch app testing is crucial. However, it is difficult, since device fragmentation is a major issue, with thousands of Android devices on the market. Even iOS is not entirely immune from this issue: between all of the different “iDevices” currently supported, there are over 30 different iOS device models on the market today. Throw in multiple languages for different countries and you can see how the test matrix is daunting, to say the least.
Taking a multi-pronged approach to testing helps to make the overall process more manageable, thorough, and successful. Take a look at three key steps to addressing the challenge of delivering high quality in a mobile world, as well as a new Pivotal Cloud Foundry microservice to help streamline the process of distributing test builds.

Test Driven Development


Testing early and often is critical for creating high quality apps. This helps catch bugs early in the development process, fixing minor issues before they turn into expensive problems and require tedious debugging later. A great way to achieve this is with Test Driven Development (TDD), one of the key concepts within the framework of agile development methodologies.
TDD is a reversal of the traditional approach to software development in which you write code, then write the automated tests for that code. TDD instead involves initially writing an automated test for a given feature, then coding the feature itself and ensuring that it passes the test.
This methodology has been shown to not only produce higher quality software but increase programmer productivity as well. It ensures that all code is covered by at least one test, which raises the baseline of quality. Beyond that, it encourages developers to focus on actual use cases throughout the development process and can reduce extraneous code. The net result is better quality software, delivered faster.
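The test-first cycle described above can be sketched in plain Java (a real project would use a test framework such as JUnit; the PriceCalculator feature here is invented for illustration):

```java
// TDD sketch: the test is specified first, then just enough code is written to pass it.
public class TddSketch {

    // Step 2: the feature, implemented only after the test below was written.
    static class PriceCalculator {
        static double withTax(double net, double taxRate) {
            return net * (1 + taxRate);
        }
    }

    // Step 1: the test. In a real project this would be a JUnit @Test method.
    static void testWithTaxAddsTax() {
        double gross = PriceCalculator.withTax(100.0, 0.2);
        if (Math.abs(gross - 120.0) > 1e-9) {
            throw new AssertionError("expected 120.0 but was " + gross);
        }
    }

    public static void main(String[] args) {
        testWithTaxAddsTax();
        System.out.println("test passed");
    }
}
```

Running the test before the feature exists gives the “red” step; implementing withTax() turns it “green”, guaranteeing the code is covered by at least one test.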


Automated Testing


With the myriad devices on the market today, you’ll want to automate as much as possible. This involves executing test scripts on devices, and it is particularly useful for regression testing and smoke tests (simple tests to ensure the app functions at a very basic level) because these are generally repetitive and time consuming to do manually.
There are many tools available for automated testing, and several of them require the use of rooted or jailbroken devices. We recommend avoiding those, since rooting phones voids warranties, opens security holes, and even changes the behavior of the device being tested. Don’t waste time testing on devices that are not set up for their legally intended use.
Automated testing saves significant time and money by allowing you to run tests on many devices simultaneously and quickly. Since machines never miss a step, leveraging automation reduces risk of human error throughout the testing process.

 

User Testing and Automated Distribution


Despite the advantages of automation, it doesn’t address everything. User testing is still required in order to cover the unexpected aspects of human behavior that happen in real life. It is also critical to obtain valuable insights about how people actually use your app.
Once you’ve completed the automated tests, it’s time to put your app in the hands of real people, starting with the QA team and continuing with test user groups. You’ll want to stick to this “order of operations” in order to catch and fix the most basic issues before bringing your app in front of your most valuable testing audience.
Getting a pre-release mobile app in the hands (or pockets, as it were) of users can be a painful process. It often involves emailing a file, dragging/dropping it into a desktop application, and syncing it via cable to the device. This can especially be a nuisance for typical non-tech savvy users who don’t do it every day.
The challenges with user testing process don’t stop there. Keeping tabs on test user groups for different apps and making sure everyone has the correct versions of the correct apps is another burden for mobile development and QA teams. Using a platform to manage the distribution and version management of test builds will streamline this process and reduce frustration. Ultimately, an effective process is necessary to scale your user testing practice for the multiple apps and frequent app updates that define a best-in-class mobile testing strategy.
The new App Distribution for Pivotal Cloud Foundry simplifies this process by providing an easy, intuitive way for users to do over-the-air (OTA) installs of pre-release apps with the tap of a button. It handles device registration, user and group management, distribution of apps, notification of new available app versions, and more. It supports all of the major mobile platforms and runs on Pivotal Cloud Foundry, so enterprises can deploy it in a private cloud, on premises, for full control and privacy. This makes life a lot easier for users testing the apps, as well as for the mobile development and QA teams releasing them.
Following these three steps diligently will lead to higher quality apps, better user engagement, and stronger app ratings. Moreover, it will help your organization achieve overall success in your mobile app efforts in a world where mobile is increasingly the primary way customers, employees, and business partners work and interact.


What is Appium? How is it used to test Mobile Apps?

Appium is a free open-source test automation framework for mobile testing. It is a wrapper that translates Selenium commands into iOS and Android commands.

The Appium framework can use any language that Selenium WebDriver supports (Java, Python, C#, Ruby, JavaScript, PHP, etc.); there is no need to worry about what the Appium server itself supports. There are plenty of client libraries, tailored to each language, ready to assist you. The Appium server communicates over a standardized JSON-over-HTTP protocol, so the server, the devices, and your test script (local machine) can run on separate machines.

You can write your tests against iOS and Android platforms using the same API, enabling code reuse between test suites. But you still need separate iOS and Android scripts, because the UI elements are different on the two platforms.

  • Appium is a test automation tool for mobile web, native, and hybrid applications.
  • It is an open-source automation tool used to automate apps on the Android and iOS platforms.
  • Most importantly, Appium is a cross-platform automation tool, so you can write automated tests against both iOS and Android using the same API.
  • Being cross-platform enables a large amount of code reuse between iOS and Android test suites.
  • Appium supports test automation on emulators, simulators, and physical mobile devices.
  • Appium was developed around a few key philosophy points: 1. You should not have to recompile your app to automate it. 2. You should not be locked into a specific language or framework. 3. It should be open source. 4. It should not reinvent the wheel when it comes to automation APIs.

If you are a mobile app test engineer, Appium can make your regression testing easier, especially for large mobile apps that are continuously updated with new features and functionality. Another major benefit of using Appium for mobile app automation is that it supports the multiple platforms and languages given below, and you can use any testing framework.

Multiple Platforms Support

Appium supports the following platforms:

  • Android
  • iOS
  • FirefoxOS

Multiple Languages Support

Appium supports the following languages via the Selenium WebDriver API and language-specific client libraries:

  • Java
  • Objective-C
  • JavaScript with Node.js
  • PHP
  • Python
  • Ruby
  • C#
  • Clojure
  • Perl

There are other advantages as well: no source code is needed to test the app, since you can test it directly, and you can involve built-in apps such as the camera or calendar in your test script if required.
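As an illustration of the JSON-over-HTTP protocol mentioned above, a session is typically started by sending a set of desired capabilities to the Appium server. A minimal sketch of such a payload for an Android app might look like the following (the device name, version, and file path are illustrative):

```json
{
  "platformName": "Android",
  "platformVersion": "12",
  "deviceName": "Pixel 5",
  "app": "/path/to/app.apk",
  "automationName": "UiAutomator2"
}
```

The same capability mechanism selects iOS instead of Android, which is what makes the single cross-platform API possible.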

What is database testing?

Database testing is the technique of checking the data that gets inserted, updated, or deleted in various databases. The approach differs based on the type of data you are verifying.
The method of database testing also differs between OLTP and OLAP systems. For OLTP (Online Transaction Processing), you check whether data is inserted, deleted, and modified properly.
Database testing is a kind of software testing where you make sure that what you enter in web forms is saved into the database as expected, and that data constraints are maintained during a migration or upgrade of your web application.


  1. Checking whether a data submission made through a UI application is stored in the database. This is referred to as database testing.
  2. Data warehouse testing: feed files from a UI application are fed into a system, and data is sent to different layers after applying business rules, using ETL tools like Informatica. The QA team provides complete testing from the file level to the table level using Unix scripts and SQL queries.
  3. Database testing generally deals with the following:
    1. Verifying the integrity of UI data against database data.
    2. Verifying whether any junk data is displayed in the UI beyond what is stored in the database.
    3. Verifying the execution of stored procedures with input values taken from the database tables.
    4. Verifying data migration.
    5. Data validity testing.
    6. Data integrity testing.
    7. Performance related to the database.
    8. Testing of procedures, triggers, and functions.
  4. For data validity testing, you should have knowledge of SQL.
  5. For data integrity testing, you should know about referential integrity and the different constraints.
  6. For performance-related testing, you should understand the table structure and design.
Database testing involves the following four activities:
  1. Validating the mapping of data from the front end (UI) to the back end (DB/table).
  2. Validating data integrity (ensuring that data is the same in all related tables).
  3. Validating the ACID properties (Atomicity, Consistency, Isolation, Durability) of all transactions.
  4. Validating constraints and the performance of the database.
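As a toy illustration of the integrity checks above, the sketch below uses in-memory maps as stand-ins for database tables and checks referential integrity (every foreign key in one "table" must exist as a primary key in another):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IntegrityCheckDemo {
    // Toy "tables": orders reference customers by id (a foreign key).
    static boolean referentialIntegrityHolds(Map<Integer, String> customers,
                                             List<Integer> orderCustomerIds) {
        for (int customerId : orderCustomerIds) {
            if (!customers.containsKey(customerId)) {
                return false; // orphaned row: integrity violated
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<Integer, String> customers = new HashMap<>();
        customers.put(1, "Alice");
        customers.put(2, "Bob");

        System.out.println(referentialIntegrityHolds(customers, List.of(1, 2, 1))); // true
        System.out.println(referentialIntegrityHolds(customers, List.of(1, 3)));    // false
    }
}
```

In a real database test the same check would be expressed as a SQL query joining the two tables and asserting that no orphaned rows come back.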
With database testing, you can check the overall health and stability of databases, including master data as well as stored procedures and business logic, to ensure quality performance and a continuous contribution to key business processes.
  1. Diagnosis of specific database on server
  2. Industry-standard benchmark testing of databases
  3. Managing and governing database resources and their utilization

Database Comparison Testing

You can compare two different data sets to verify the integrity of the data and ensure accurate reporting.
  1. Run queries to check whether the data has been processed correctly.
  2. Get detailed drill-down information on database testing errors and data divergence.
  3. Perform data mirroring across different data versions.

Database Validation

You can validate various databases and their quality for further usage and analysis.
  1. Validation of the database server configuration.
  2. Verification of the database server load.
  3. Identifying and authenticating database end users.
If database testing is done using automated testing tools like TestingWhiz, you can:
  1. Streamline and accelerate data validation.
  2. Reduce data anomalies through early detection of defects.
  3. Reuse test components more effectively, reducing time-to-market and simplifying the test management process.
Typical checks include: relationships, keys, procedures, queries (backend queries can run almost a page long) and their logic, triggers, table schemas, table validation, naming conventions, parameter optimization in procedures, views, and temp tables.




  • What is the difference between Automation testing and Manual testing?


    Points of difference between manual and automation testing: manual and automation testing can be considered the two pillars of software testing; they form the core of the field. In practice, testing is carried out with a combination of both.
    For now, we will look at the points of difference between the two, which should give you a fair idea of both.

    Manual testing

      • It is carried out manually: a tester himself executes all the steps, e.g. in a test case.
      • Manual testing is the most basic step; without carrying out manual testing, one cannot proceed to automation testing.
      • In this kind of testing, testers also carry out random (ad hoc) testing in order to discover bugs.
      • Due to the error-guessing techniques used in manual testing, more bugs are generally found than with automation testing.
      • It consumes more time.
      • Manual testing is generally carried out in a sequential manner.
      • Carrying out regression testing is tedious in manual testing.
      • More testers are required, because test cases must be executed manually.
      • The results are less accurate, because manual errors come into the picture.
      • Batch testing cannot be performed.
      • The reliability of manual testing is considered to be lower.
      • Programming is not involved in manual testing.
      • Manual testing is considered to be of lower quality.
      • Manual testing can be carried out without the use of any tool.
      • All the well-known stages of the STLC, such as test planning, test deployment, test execution, result investigation, and bug tracking and reporting, fall under manual testing and are performed through human effort.

    Automation testing

      • It is carried out with the help of automation tools like QTP, Selenium, etc. Many are available in the market; based on factors like requirements and budget, one needs to choose one.
      • Automation testing can be considered an integral and continuous extension of manual testing.
      • In automation testing, we test the application by running scripts; the tools allow us to write the scripts and execute them as well.
      • Automation testing is most useful when repetitive functionality of the software is to be tested.
      • It consumes less time.
      • Automation testing can be carried out on a number of machines at one time.
      • Regression testing is easier to carry out with automation, thanks to the tools.
      • Fewer testers are required, because test cases are executed using automation tools.
      • Results are highly accurate, as manual errors are out of the question.
      • Multiple kinds of batch testing can be carried out.
      • Automation testing is considered the more reliable of the two, owing to the involvement of tools.
      • Programming is the heart of automation testing, as scripts need to be written in languages like Perl, Python, etc.
      • Automation testing is considered to be of higher quality.
      • Tools form an integral part of automation testing.
      • In automation testing, all the popular stages of the STLC are completed using various open-source and paid tools like Selenium, JMeter, QTP, LoadRunner, WinRunner, etc.

    Why Automation Testing?

    In software testing, test automation is the use of special software (separate from the software being tested) to control the execution of tests and the comparison of actual outcomes with predicted outcomes.
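The definition above (software that executes tests and compares actual outcomes with predicted outcomes) can be sketched as a tiny data-driven check in plain Java; the function under test here is invented for illustration:

```java
public class AutomationSketch {
    // A made-up function under test.
    static int square(int x) { return x * x; }

    public static void main(String[] args) {
        // Each row: input and predicted outcome; the "special software" is this loop.
        int[][] cases = { {0, 0}, {3, 9}, {-4, 16} };

        int failures = 0;
        for (int[] testCase : cases) {
            int actual = square(testCase[0]);
            int expected = testCase[1];
            if (actual != expected) {
                System.out.println("FAIL: square(" + testCase[0] + ") = " + actual
                        + ", expected " + expected);
                failures++;
            }
        }
        System.out.println(failures == 0 ? "all tests passed" : failures + " failures");
    }
}
```

The table-driven shape is also what makes batch execution on many machines practical: the same loop can replay hundreds of cases without human intervention.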








    What is ETL testing?

     
    ETL stands for Extract-Transform-Load, and it describes how data is loaded from a source system into a target system.
    For example, consider a retail store with different departments such as sales, marketing, and logistics. Each handles customer information independently, and the way each stores that data is quite different: the sales department stores it by customer name, while the marketing department stores it by customer id.
    Now, if they want to check the history of a customer and know which different products he or she bought owing to different marketing campaigns, it would be very tedious.
    The solution is to use a data warehouse to store information from the different sources in a uniform structure, using ETL.
     
    ETL Testing: the process of validating the data in the source and target systems based on the business requirements.
    ETL Testing Process:
    Like other testing processes, ETL testing goes through different phases. The phases of the ETL testing process are as follows:

    ETL Testing is performed in five stages:
    1. Identifying data sources and requirements.
    2. Data acquisition.
    3. Implement business logic and dimensional Modelling.
    4. Build and populate data.
    5. Build Reports.



    ETL testing is slightly different from normal manual testing, so the test engineer should have the following skills to test against the business expectations:
    1) Ability to write SQL queries (Oracle/Microsoft SQL Server/MySQL).
    2) Understanding of SQL joins.
    3) Understanding of data warehousing concepts.
    4) Data analysis capability.
    5) Knowledge of ETL tools, e.g. Informatica, DataStage, SSIS.
    6) Knowledge of UNIX.
    7) Understanding of job scheduling tools, e.g. Control-M.
    8) Understanding of the end-to-end data flow through the source and target systems.
    9) Ability to understand and create test data.
    10) Testing fundamentals.
    ETL testing ensures that information is not just loaded correctly but is also aggregated exactly and used properly for its function, which is why it is important.
    A common misconception is that if you have tested the database, you have tested the data warehouse. This is not correct.
    OLTP (online transaction processing) testing is database testing, but OLAP (online analytical processing) testing applies to the data warehouse and, more specifically, is ETL testing. The difference is this: database testing compares data from source to target tables, while data warehouse (ETL) testing traces the accuracy of the information and data present throughout the data warehouse.
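A toy sketch of source-to-target validation: re-apply the transformation rule to the source data and compare it with what was loaded into the target. The rule here (trimming and uppercasing customer names) is invented for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;

public class EtlValidationDemo {
    // The (hypothetical) business rule applied during the Transform step.
    static String transform(String sourceName) {
        return sourceName.trim().toUpperCase();
    }

    // ETL check: re-apply the rule to the source and compare with the loaded target.
    static boolean targetMatchesSource(List<String> source, List<String> target) {
        List<String> expected = source.stream()
                                      .map(EtlValidationDemo::transform)
                                      .collect(Collectors.toList());
        return expected.equals(target);
    }

    public static void main(String[] args) {
        List<String> source = List.of(" alice ", "bob");
        List<String> goodTarget = List.of("ALICE", "BOB");
        List<String> badTarget = List.of("ALICE", "bob"); // rule was not applied

        System.out.println(targetMatchesSource(source, goodTarget)); // true
        System.out.println(targetMatchesSource(source, badTarget));  // false
    }
}
```

In practice the source and target lists would come from SQL queries against the source system and the warehouse, but the comparison logic is the same.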

    What is the Difference Between Hashtable and HashMap in Java? - Interview Question


    HashTable

    The Hashtable class implements a hash table, which maps keys to values. Any non-null object can be used as a key or as a value.
    To successfully store and retrieve objects from a hashtable, the objects used as keys must implement the hashCode method and the equals method.

    Class Hashtable
       java.lang.Object
          java.util.Dictionary
             java.util.Hashtable

    public class Hashtable<K,V>
    extends Dictionary<K,V>
    implements Map<K,V>, Cloneable, Serializable

    HashMap

    This is a hash table based implementation of the Map interface. The implementation provides all of the optional map operations, and it permits null keys and null values.
    (The HashMap class is roughly equivalent to Hashtable, the major differences being that it is unsynchronized and permits nulls.)
    This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

    Class HashMap
       java.lang.Object
          java.util.AbstractMap
             java.util.HashMap

    public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable

    Type Parameters:

    K - the type of keys maintained by this map
    V - the type of mapped values
    Map is an interface; HashMap is a particular implementation of that interface. HashMap uses a collection of hashed key values to do its lookup.

    Similarities

    • HashMap and Hashtable both are data structures

    • Both store data in key and value form.

    • Both use hashing technique to store unique keys.


    Differences


    • Hashtable was part of the original java.util and is a concrete implementation of a Dictionary. However, Java 2 re-engineered Hashtable so that it also implements the Map interface.

    • HashMap inherits AbstractMap class whereas Hashtable inherits Dictionary class.

    • HashMap allows one null key and multiple null values, while the Hashtable doesn't allow any null key or value.

    • Neither HashMap nor Hashtable allows duplicate keys: in both, each key maps to a single value, and inserting an existing key again simply replaces its old value.

    • The HashMap's iterator is fail-fast, but the Hashtable's enumerator is not fail-fast.

    • The access to hashtable is synchronized on the table while the access to the hashmap is not synchronized.

    • HashMap is generally faster than Hashtable, since it is unsynchronized.

    • HashMap is traversed by Iterator but Hashtable is traversed by Enumerator and Iterator.

    • HashMap is not synchronized: it is not thread-safe and can't be shared between many threads without proper synchronization code, whereas Hashtable is synchronized, thread-safe, and can be shared between many threads.
    We can make a HashMap synchronized by calling
    Map m = Collections.synchronizedMap(hashMap); Hashtable, by contrast, is internally synchronized and can't be unsynchronized.

    • The Iterator in HashMap is fail-fast, whereas the Enumerator in Hashtable is not fail-fast.
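The null-handling and duplicate-key behaviors described above can be verified with a small runnable sketch:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapDifferencesDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "null key is fine");   // HashMap allows one null key...
        hashMap.put("k", null);                  // ...and null values
        System.out.println(hashMap.get(null));   // prints: null key is fine

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");         // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable threw NullPointerException for a null key");
        }

        // Duplicate keys are not allowed in either: put() replaces the old value.
        hashMap.put("k", "v1");
        hashMap.put("k", "v2");
        System.out.println(hashMap.get("k"));    // prints: v2
    }
}
```

The synchronization difference does not show up in single-threaded code like this, but it is why Hashtable is slower and why a shared HashMap must be wrapped with Collections.synchronizedMap().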



    Do's and Don'ts of Interfaces in Java. Is it allowed to do WebDriver driver = new WebDriver() in Selenium?


    You would have often seen this line of code in Selenium,

    WebDriver driver = new FirefoxDriver();

    WebDriver is an interface, and we are defining a reference variable (driver) of this interface type.

    Any object we refer to through the interface WebDriver has to be an instance of a concrete class, e.g. FirefoxDriver, which implements the WebDriver interface. Writing WebDriver driver = new WebDriver() is not allowed, because an interface cannot be instantiated directly.
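The rule can be demonstrated with toy stand-ins for WebDriver (the interface) and FirefoxDriver (an implementing class):

```java
public class InterfaceDemo {

    interface Driver {                 // stands in for the WebDriver interface
        String get(String url);
    }

    static class FirefoxLikeDriver implements Driver {  // stands in for FirefoxDriver
        public String get(String url) {
            return "firefox opened " + url;
        }
    }

    public static void main(String[] args) {
        // Driver d = new Driver();    // would NOT compile: Driver is an interface
        Driver driver = new FirefoxLikeDriver();  // allowed: concrete implementation
        System.out.println(driver.get("https://example.com"));
    }
}
```

Programming against the interface is what lets the same test code run on a different browser by swapping in another implementing class.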



    How do you launch Chrome browser using Selenium WebDriver?


    To run WebDriver on any browser, we need the browser itself and the Selenium server jar file.

    The Mozilla Firefox browser is supported by default by Selenium 2. For other browsers like IE and Chrome, you need to follow some basic steps before you are able to launch or automate them with WebDriver: download the matching driver executable (chromedriver for Chrome), make its location known to WebDriver, and then instantiate the browser-specific driver class.
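A minimal Java sketch of those steps (assuming the Selenium Java bindings are on the classpath and chromedriver has been downloaded; the local path below is illustrative):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LaunchChrome {
    public static void main(String[] args) {
        // Point WebDriver at the downloaded chromedriver executable
        // (the path is an example; use your own location).
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");

        WebDriver driver = new ChromeDriver();   // launches a Chrome window
        driver.get("https://www.google.com");
        System.out.println(driver.getTitle());
        driver.quit();                           // always close the session
    }
}
```

If the system property is missing or points to the wrong file, instantiating ChromeDriver fails with an IllegalStateException explaining that the driver executable could not be found.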


    What is the difference between Regression Testing Vs Retesting

    Regression testing is a type of software testing that verifies that software that was previously developed and tested still performs correctly after it was changed or interfaced with other software. Retesting, by contrast, re-executes the test cases that previously failed in order to confirm that a specific defect has actually been fixed.

    Difference between method Overriding and Overloading in Java

    Overloading means defining multiple methods with the same name but different parameter lists in the same class; the compiler picks the right one at compile time. Overriding means a subclass redefines a superclass method with the same signature; the version that runs is chosen at runtime based on the object's actual type.
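Both concepts can be shown in one runnable sketch (the Shape/Circle classes are invented for illustration):

```java
public class OverloadOverrideDemo {

    static class Shape {
        String describe() { return "a shape"; }            // will be overridden below

        // Overloading: same name, different parameter lists, resolved at compile time.
        static int area(int side) { return side * side; }              // square
        static int area(int width, int height) { return width * height; } // rectangle
    }

    static class Circle extends Shape {
        @Override
        String describe() { return "a circle"; }           // overriding, resolved at runtime
    }

    public static void main(String[] args) {
        System.out.println(Shape.area(3));        // 9  (one-argument overload)
        System.out.println(Shape.area(3, 4));     // 12 (two-argument overload)

        Shape s = new Circle();                   // the object's runtime type decides
        System.out.println(s.describe());         // a circle
    }
}
```

Note that the reference type of `s` is Shape, yet the Circle version of describe() runs; that runtime dispatch is the essence of overriding.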

    How to Reverse an arrayList in Java using Collections and Recursion ?

    To reverse a List in Java, e.g. an ArrayList or LinkedList, you should normally use
    the Collections.reverse() method. It is safe and tested, and probably
    performs better than the first version of a hand-written method to reverse an ArrayList.
    In this tutorial, I'll also show you how to reverse an ArrayList of Strings using recursion.
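Both approaches can be sketched as follows:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ReverseListDemo {
    // Recursive reversal: reverse the tail, then append the head at the end.
    static <T> List<T> reverseRecursively(List<T> list) {
        if (list.size() <= 1) {
            return new ArrayList<>(list);       // base case: nothing to reverse
        }
        List<T> reversed = reverseRecursively(list.subList(1, list.size()));
        reversed.add(list.get(0));
        return reversed;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("a", "b", "c"));

        // Library way: Collections.reverse() reverses the list in place.
        Collections.reverse(names);
        System.out.println(names);                                      // [c, b, a]

        // Recursive way: returns a new reversed list, original untouched.
        System.out.println(reverseRecursively(List.of("a", "b", "c"))); // [c, b, a]
    }
}
```

Collections.reverse() runs in linear time and constant extra space, while the recursive version builds new lists at each level, which is why the library method is the practical choice.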

    Built-in Functions In Python - Letter E



    enumerate(sequence, start=0)

    Return an enumerate object. sequence must be a sequence, an iterator, or some other object which supports iteration. The next() method of the iterator returned by enumerate() returns a tuple containing a count (from start which defaults to 0) and the values obtained from iterating over sequence:

    >>> seasons = ['Spring', 'Summer', 'Fall', 'Winter']
    >>> list(enumerate(seasons))
    [(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
    >>> list(enumerate(seasons, start=1))
    [(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]
    Equivalent to:

    def enumerate(sequence, start=0):
        n = start
        for elem in sequence:
            yield n, elem
            n += 1
    New in version 2.3.


    eval(expression[, globals[, locals]])

    The arguments are a Unicode or Latin-1 encoded string and optional globals and locals. If provided, globals must be a dictionary. If provided, locals can be any mapping object.


    The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace. If the globals dictionary is present and lacks ‘__builtins__’, the current globals are copied into globals before expression is parsed. This means that expression normally has full access to the standard __builtin__ module and restricted environments are propagated. If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed in the environment where eval() is called. The return value is the result of the evaluated expression. Syntax errors are reported as exceptions. Example:

    >>> x = 1
    >>> print eval('x+1')
    2
    This function can also be used to execute arbitrary code objects (such as those created by compile()). In this case pass a code object instead of a string. If the code object has been compiled with 'exec' as the mode argument, eval()'s return value will be None.

    Hints: dynamic execution of statements is supported by the exec statement. Execution of statements from a file is supported by the execfile() function. The globals() and locals() functions returns the current global and local dictionary, respectively, which may be useful to pass around for use by eval() or execfile().

    See ast.literal_eval() for a function that can safely evaluate strings with expressions containing only literals.
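    As a quick sketch of the points above (shown here in Python 3 syntax, where print is a function), the namespace arguments and the safe ast.literal_eval() alternative behave as follows:

```python
import ast

# Evaluation in the current namespace.
x = 1
print(eval('x + 1'))  # 2

# Passing an explicit globals dictionary restricts which names are visible;
# __builtins__ is copied in automatically, as described above.
print(eval('a * b', {'a': 6, 'b': 7}))  # 42

# ast.literal_eval() accepts only literal expressions, so it is the safe
# choice for strings that come from untrusted input.
print(ast.literal_eval('[1, 2, 3]'))  # [1, 2, 3]
```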


    execfile(filename[, globals[, locals]])

    This function is similar to the exec statement, but parses a file instead of a string. It is different from the import statement in that it does not use the module administration — it reads the file unconditionally and does not create a new module. [1]

    The arguments are a file name and two optional dictionaries. The file is parsed and evaluated as a sequence of Python statements (similarly to a module) using the globals and locals dictionaries as global and local namespace. If provided, locals can be any mapping object. Remember that at module level, globals and locals are the same dictionary. If two separate objects are passed as globals and locals, the code will be executed as if it were embedded in a class definition.


    If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed in the environment where execfile() is called. The return value is None.

    Note: The default locals act as described for function locals() below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see the effects of the code on locals after execfile() returns. execfile() cannot be used reliably to modify a function's locals.
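    Note that execfile() exists only in Python 2. As a rough sketch, the same behavior can be reproduced in Python 3 by compiling and exec()-ing the file's contents; the helper name execfile_py3 below is made up for illustration:

```python
import os
import tempfile

def execfile_py3(filename, globals=None, locals=None):
    # Rough Python 3 stand-in for Python 2's execfile() built-in:
    # parse the file and execute it as a sequence of statements.
    with open(filename) as f:
        code = compile(f.read(), filename, 'exec')
    exec(code, globals, locals)

# Write a tiny script, then execute it into an explicit namespace
# so we can inspect the effects afterwards.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write('result = 6 * 7\n')
    path = f.name

namespace = {}
execfile_py3(path, namespace)
os.remove(path)
print(namespace['result'])  # 42
```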

    Built-in Functions In Python - Letter D



    delattr(object, name)
    The arguments are an object and a string. The string must be the name of one of the object’s attributes. The function deletes the named attribute, provided the object allows it. For example, delattr(x, 'foobar') is equivalent to del x.foobar.
    class dict(**kwarg)
    class dict(mapping, **kwarg)
    class dict(iterable, **kwarg)
    Create a new dictionary. The dict object is the dictionary class. See dict and Mapping Types — dict for documentation about this class.
    For other containers see the built-in list, set, and tuple classes, as well as the collections module.
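    A short sketch of both built-ins (the Config class below is just an illustration):

```python
class Config(object):
    debug = True

c = Config()
c.timeout = 30
delattr(c, 'timeout')         # equivalent to: del c.timeout
print(hasattr(c, 'timeout'))  # False

# The three dict() constructor forms listed above:
a = dict(x=1, y=2)              # keyword arguments
b = dict({'x': 1}, y=2)         # mapping plus keyword arguments
d = dict([('x', 1), ('y', 2)])  # iterable of key/value pairs
print(a == b == d)  # True
```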

    Difference between method Overriding and Overloading in Java


    1)
    Method overloading is used to increase the readability of the program.
    Method overriding is used to provide a specific implementation of a method that is already provided by its super class.
    2)
    Method overloading is performed within a class.
    Method overriding occurs in two classes that have an IS-A (inheritance) relationship.
    3)
    In method overloading, the parameters must be different.
    In method overriding, the parameters must be the same.
    4)
    Method overloading is an example of compile-time polymorphism.
    Method overriding is an example of run-time polymorphism.
    5)
    In Java, method overloading can't be performed by changing only the return type of the method. The return type can be the same or different, but the parameters must differ.
    In method overriding, the return type must be the same or covariant.
    Java Method Overloading example

    class OverloadingExample{
        static int add(int a, int b){ return a + b; }
        static int add(int a, int b, int c){ return a + b + c; }
    }
    Java Method Overriding example

    class Animal{
        void eat(){ System.out.println("eating..."); }
    }
    class Dog extends Animal{
        void eat(){ System.out.println("eating bread..."); }  // overrides Animal.eat()
    }
    class Test{
        public static void main(String[] args){
            Animal a = new Dog();
            a.eat();  // prints "eating bread..." (run-time polymorphism)
        }
    }

    How to set ADB Path in System Variable ? : Android , Mobile automation testing

    Check the installation path; by default, the Android SDK is installed in C:\Program Files (x86)\Android.
    Update the PATH variable with the following line:
    C:\Program Files (x86)\Android\android-sdk\tools\;C:\Program Files (x86)\Android\android-sdk\platform-tools\

    Now you can start the ADB server from CMD regardless of the prompt's current directory.

    Android SDK ADB server in CMD screen

    How to edit a system variable

    Here's a short how-to for the newbies. What you need is the Environment Variables dialog.
    1. Click Start (Orb) menu button.
    2. Right click on Computer icon.
    3. Click on Properties. This will bring up System window in Control Panel.
    4. Click on Advanced System Settings on the left. This will bring up the System Properties window with Advanced tab selected.
    5. Click on Environment Variables button on the bottom of the dialog. This brings up the Environment Variables dialog.
    6. In the System Variables section, scroll down till you see Path.
    7. Click on Path to select it, then the Edit button. This will bring up the Edit System Variable dialog.
    8. While the Variable value field is selected, press the End key on your keyboard to go to the right end of the line, or use the arrow keys to move the marker to the end.
    9. Type in ;C:\Program Files (x86)\Android\android-sdk\tools\;C:\Program Files (x86)\Android\android-sdk\platform-tools\ and click OK.
    10. Click OK again, then OK once more to save and exit out of the dialogs.
    That's it! You can now start any Android SDK tool, e.g. ADB or Fastboot, regardless of what your current directory is in CMD. For good measure here's what the dialog looks like. This is where you edit the Path variable.

    environment variables

    List of tools used for automated cross-browser testing of Ajax websites


    Sahi (http://sahi.co.in/) can also be added to this list. Some good points about Sahi:

    1) It handles AJAX/page load delays automatically. No need for explicit wait statements in code.

    2) It can handle applications with dynamically generated ids. It has easy identification mechanisms to relate one element to another (for example, click the delete button near user "Ram"). ExtJS, ZkOSS, GWT, SmartGWT, etc. have been handled via Sahi.
    Sample link:
    http://books.zkoss.org/wiki/Smal...

    Why is everyone Obsessed with BIG Data ?


    What is Big Data?


    Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.


    Alerts in Java

    JavaScript Alerts

    WebDriver has an Alerts API to handle JavaScript alerts. Alert is an interface.


    // Get a handle to the open alert, prompt, or confirmation dialog
    Alert alert = driver.switchTo().alert();
    // Then either accept (OK) or dismiss (Cancel) it
    alert.accept();    // or: alert.dismiss();

    How should you learn Big Data ?

    Generally speaking, NoSQL databases aren't really used for analytics (but may be a source).

    1) Think about a big data problem you want to solve.

    Traditionally, big data has been described by the "3Vs": Volume, Variety, Velocity.  What is a real analytics problem that is best solved using big data tools?  What kind of metrics do you want to capture?  The most common use cases today involve scraping large volumes of log data.  This is because log data tends to be very unstructured, can come from multiple sources, and especially for popular websites, can be huge (terabytes+ a day).  Thus having a framework for performing distributed computing tasks is essential to solve this problem.

    2) Download and set up your big data solution

    The easiest thing to do is just use a pre-built virtual machine, which just about any Hadoop provider makes freely available [1], and then run it locally. You could also use a service like Amazon Web Services. Most commonly, people will use the map-reduce framework and Hive for crunching large volumes of data. Since you're just looking to learn, you won't need terabytes, or even gigabytes, of data to play with, so getting access to a 100-node cluster won't be a priority, although there are certainly challenges to overcome and understand once you start to get into multi-node environments.

    3) Solve your big data problem
    Once you have your environment set up, get to coding!  There is plenty of documentation and tutorials out there to reference and learn from [2].  And really, just type questions into Google and you'll get a ton of resources.  Read up on the tools and understand how the technology can be applied to solving for your use case.  Think about the kinds of metrics you're looking to capture within your data.  Think about what kind of map-reduce programs you will need to write to capture the data you want to analyze.  Think about how you can leverage something like Hive or Pig to do a lot of the heavy number crunching.  Something that probably won't be apparent in a single-node environment but is a real-world problem in any distributed environment is understanding data skew and how it affects performance [3].
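    As a toy single-machine illustration of the map-reduce style of program discussed above (a real Hadoop job would distribute these phases across many nodes), a word count over some log lines might look like:

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in one line of log data.
    return [(word, 1) for word in line.split()]

def reducer(pairs):
    # Reduce phase: sum the counts for each key, mimicking shuffle-and-reduce.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ['error disk full', 'error timeout', 'info disk ok']
word_counts = reducer(chain.from_iterable(mapper(l) for l in lines))
print(word_counts['error'])  # 2
```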

    4) Analytics & Visualization: The sexy side of Big Data & BI
    Now that you've solved your big data problem and have your data in a manageable format, it's time to dazzle your boss with some sweet reports.  Most enterprise architectures that leverage Hadoop will still have a SQL database for storing and reporting data out of Hadoop (you will quickly come to realize that map-reduce has a very long response time, even on small data sets).  Loading data out of Hadoop and into a SQL database is good practice for the real world but, for the sake of learning the big data side of it, not necessary.  There are several (free) reporting tools out there that will connect to Hadoop/Hive directly and will work fine for learning purposes [4].  If you want to be the cool kid on the block (and super employable at large companies), I would pick up Tableau (product) [5].  You could also lend yourself to picking up some predictive modeling and machine learning skills with some of the tools that are out there [6], and maybe start calling yourself a data scientist.







    • MapReduce is the Google paper that started it all (Page on googleusercontent.com). It's a paradigm for writing distributed code inspired by some elements of functional programming. You don't have to do things this way, but it neatly fits a lot of problems we try to solve in a distributed way. The Google internal implementation is called MapReduce, and Hadoop is its open-source implementation. Amazon's Hadoop instance is called Elastic MapReduce (EMR) and has plugins for multiple languages.
    • HDFS is an implementation inspired by the Google File System (GFS) to store files across a bunch of machines when they're too big for one. Hadoop consumes data in HDFS (Hadoop Distributed File System).
    • Apache Spark is an emerging platform that has more flexibility than MapReduce but more structure than a basic message passing interface. It relies on the concept of distributed data structures (what it calls RDDs) and operators. See the Apache Spark page at apache.org for more.
    • Because Spark is a lower-level thing that sits on top of a message passing interface, it has higher-level libraries to make it more accessible to data scientists. The machine learning library built on top of it is called MLlib, and there's a distributed graph library called GraphX.
    • Pregel and its open-source twin Giraph are a way to do graph algorithms on billions of nodes and trillions of edges over a cluster of machines. Notably, the MapReduce model is not well suited to graph processing, so Hadoop/MapReduce is avoided in this model, but HDFS/GFS is still used as a data store.
    • Zookeeper is a coordination and synchronization service with which a distributed set of computers makes decisions by consensus, handles failure, etc.
    • Flume and Scribe are logging services, Flume is an Apache project and Scribe is an open-source Facebook project. Both aim to make it easy to collect tons of logged data, analyze it, tail it, move it around and store it to a distributed store.
    • Google BigTable and its open-source twin HBase were meant to be read-write distributed databases, originally built for the Google crawler, that sit on top of GFS/HDFS and MapReduce/Hadoop. Google Research Publication: BigTable
    • Hive and Pig are abstractions on top of Hadoop designed to help analysis of tabular data stored in a distributed file system (think of excel sheets too big to store on one machine). They operate on top of a data warehouse, so the high-level idea is to dump data once and analyze it by reading and processing it, rather than updating individual cells, rows, and columns. Hive has a language similar to SQL, while Pig is inspired by Google's Sawzall - Google Research Publication: Sawzall. You generally don't update a single cell in a table when processing it with Hive or Pig.
    • Hive and Pig turned out to be slow because they were built on Hadoop which optimizes for the volume of data moved around, not latency. To get around this, engineers bypassed and went straight to HDFS. They also threw in some memory and caching and this resulted in Google's Dremel (Dremel: Interactive Analysis of Web-Scale Datasets), F1 (F1 - The Fault-Tolerant Distributed RDBMS Supporting Google's Ad Business), Facebook's Presto (Presto | Distributed SQL Query Engine for Big Data), Apache Spark SQL (Page on apache.org ), Cloudera Impala (Cloudera Impala: Real-Time Queries in Apache Hadoop, For Real), Amazon's Redshift, etc. They all have slightly different semantics but are essentially meant to be programmer or analyst friendly abstractions to analyze tabular data stored in distributed data warehouses.
    • Mahout (Scalable machine learning and data mining) is a collection of machine learning libraries written in the MapReduce paradigm, specifically for Hadoop. Google has its own internal version, but they haven't published a paper on it as far as I know.
    • Oozie is a workflow scheduler. The oversimplified description would be that it's something that puts together a pipeline of the tools described above. For example, you can write an Oozie script that will scrape your production HBase data to a Hive warehouse nightly, then a Mahout script will train with this data. At the same time, you might use pig to pull in the test set into another file and when Mahout is done creating a model you can pass the testing data through the model and get results. You specify the dependency graph of these tasks through Oozie (I may be messing up terminology since I've never used Oozie but have used the Facebook equivalent).
    • Lucene is a bunch of search-related and NLP tools, but its core feature is being a search index and retrieval system. It takes data from a store like HBase and indexes it for fast retrieval from a search query. Solr uses Lucene under the hood to provide a convenient REST API for indexing and searching data. ElasticSearch is similar to Solr.
    • Sqoop is a command-line interface for backing up SQL data to a distributed warehouse. It's what you might use to snapshot and copy your database tables to a Hive warehouse every night.
    • Hue is a web-based GUI to a subset of the above tools - http://gethue.com/
    • Big data is a combination of a bunch of subjects, mainly requiring programming, analysis, NLP, machine learning, and mathematics.

      Here are a bunch of courses I came across:

          Introduction to CS Course
          Notes: Introduction to Computer Science Course that provides instructions on coding.
          Online Resources:
          Udacity - intro to CS course,
          Coursera - Computer Science 101

          Code in at least one object oriented programming language: C++, Java, or Python
          Beginner Online Resources:
          Coursera - Learn to Program: The Fundamentals,
          MIT Intro to Programming in Java,
          Google's Python Class,
          Coursera - Introduction to Python,
          Python Open Source E-Book

          Intermediate Online Resources:
          Udacity's Design of Computer Programs,
          Coursera - Learn to Program: Crafting Quality Code,
          Coursera - Programming Languages,
          Brown University - Introduction to Programming Languages

          Learn other Programming Languages
          Notes: Add to your repertoire - JavaScript, CSS, HTML, Ruby, PHP, C, Perl, Shell, Lisp, Scheme.
          Online Resources: w3schools.com - HTML Tutorial, Learn to code

          Test Your Code
          Notes: Learn how to catch bugs, create tests, and break your software
          Online Resources: Udacity - Software Testing Methods, Udacity - Software Debugging

          Develop logical reasoning and knowledge of discrete math
          Online Resources:
          MIT Mathematics for Computer Science,
          Coursera - Introduction to Logic,
          Coursera - Linear and Discrete Optimization,
          Coursera - Probabilistic Graphical Models,
          Coursera - Game Theory.

          Develop strong understanding of Algorithms and Data Structures
          Notes: Learn about fundamental data types (stack, queues, and bags), sorting algorithms (quicksort, mergesort, heapsort), and data structures (binary search trees, red-black trees, hash tables), Big O.
          Online Resources:
          MIT Introduction to Algorithms,
          Coursera - Introduction to Algorithms Part 1 & Part 2,
          Wikipedia - List of Algorithms,
          Wikipedia - List of Data Structures,
          Book: The Algorithm Design Manual

          Develop a strong knowledge of operating systems
          Online Resources: UC Berkeley Computer Science 162

          Learn Artificial Intelligence Online Resources:
          Stanford University - Introduction to Robotics, Natural Language Processing, Machine Learning

          Learn how to build compilers
          Online Resources: Coursera - Compilers

          Learn cryptography
          Online Resources: Coursera - Cryptography, Udacity - Applied Cryptography

          Learn Parallel Programming
          Online Resources: Coursera - Heterogeneous Parallel Programming


      Tools and technologies for Big Data:

      Apache spark - Apache Spark is an open-source data analytics cluster computing framework originally developed in the AMPLab at UC Berkeley.[1] Spark fits into the Hadoop open-source community, building on top of the Hadoop Distributed File System (HDFS).[2] However, Spark is not tied to the two-stage MapReduce paradigm, and promises performance up to 100 times faster than Hadoop MapReduce for certain applications.

      Database pipelining -  


    Corona:

    Corona is a new scheduling framework that separates cluster resource management from job coordination. [1] Corona introduces a cluster manager whose only purpose is to track the nodes in the cluster and the amount of free resources. A dedicated job tracker is created for each job and can run either in the same process as the client (for small jobs) or as a separate process in the cluster (for large jobs).


    One major difference from our previous Hadoop MapReduce implementation is that Corona uses push-based, rather than pull-based, scheduling. After the cluster manager receives resource requests from the job tracker, it pushes the resource grants back to the job tracker. Also, once the job tracker gets resource grants, it creates tasks and then pushes these tasks to the task trackers for running. There is no periodic heartbeat involved in this scheduling, so the scheduling latency is minimized.