What is the Difference Between HashTable and HashMap in Java? - Interview Question


HashTable

This class implements a hash table, which maps keys to values. Any non-null object can be used as a key or as a value.
To successfully store and retrieve objects from a hashtable, the objects used as keys must implement the hashCode method and the equals method.
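A minimal sketch of that requirement with a user-defined key class (the EmployeeId name and field are invented for this example):

```java
import java.util.Hashtable;
import java.util.Objects;

public class KeyDemo {
    // A user-defined key type must override equals() and hashCode()
    // consistently, or lookups in the hash table will fail.
    static final class EmployeeId {
        private final int id;
        EmployeeId(int id) { this.id = id; }

        @Override public boolean equals(Object o) {
            return o instanceof EmployeeId && ((EmployeeId) o).id == this.id;
        }
        @Override public int hashCode() {
            return Objects.hash(id);
        }
    }

    public static void main(String[] args) {
        Hashtable<EmployeeId, String> table = new Hashtable<>();
        table.put(new EmployeeId(42), "Alice");
        // Retrieval works with an *equal* key, not the same object:
        System.out.println(table.get(new EmployeeId(42))); // prints "Alice"
    }
}
```

Without the overrides, the second `new EmployeeId(42)` would hash to a different bucket and the lookup would return null.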

Class Hashtable
   java.lang.Object
      java.util.Dictionary
         java.util.Hashtable

public class Hashtable<K,V>
extends Dictionary<K,V>
implements Map<K,V>, Cloneable, Serializable

HashMap

This is a hash table based implementation of the Map interface. It provides all of the optional map operations and permits null values and the null key.
(The HashMap class is roughly equivalent to Hashtable, with the major difference being that it is unsynchronized and permits nulls.)
This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

Class HashMap
   java.lang.Object
      java.util.AbstractMap
         java.util.HashMap

public class HashMap<K,V>
extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable

Type Parameters:

K - the type of keys maintained by this map
V - the type of mapped values
Map is an interface; HashMap is a particular implementation of that interface. HashMap uses the hash codes of its keys to do lookups.

Similarities

• HashMap and Hashtable are both hash-based data structures that implement the Map interface.

• Both store data in key and value form.

• Both use a hashing technique to store unique keys.


Differences


• Hashtable was part of the original java.util and is a concrete implementation of a Dictionary. However, Java 2 re-engineered Hashtable so that it also implements the Map interface.

• HashMap extends the AbstractMap class, whereas Hashtable extends the Dictionary class.

• HashMap allows one null key and multiple null values, while Hashtable doesn't allow any null key or value.

• Neither class allows duplicate keys: in both HashMap and Hashtable a key maps to exactly one value, and putting an existing key again simply replaces its old value.

• HashMap's Iterator is fail-fast, whereas Hashtable's Enumeration is not fail-fast.

• The access to hashtable is synchronized on the table while the access to the hashmap is not synchronized.

• HashMap is generally faster than Hashtable because its methods are not synchronized.

• HashMap is traversed with an Iterator, while Hashtable can be traversed with either an Enumeration or an Iterator.

• HashMap is not synchronized: it is not thread-safe and can't be shared between many threads without external synchronization, whereas Hashtable is synchronized, thread-safe, and can be shared between many threads.
We can obtain a synchronized view of a HashMap by calling
Map m = Collections.synchronizedMap(hashMap), but Hashtable is internally synchronized and can't be made unsynchronized.

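The null-handling and synchronization differences above can be demonstrated with a small self-contained sketch using only java.util:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapDifferences {
    public static void main(String[] args) {
        // HashMap accepts one null key and any number of null values.
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "value for null key");
        hashMap.put("k", null);
        System.out.println(hashMap.get(null)); // prints "value for null key"

        // Hashtable rejects both null keys and null values.
        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }

        // A HashMap can be wrapped to get Hashtable-like synchronization.
        Map<String, String> syncMap = Collections.synchronizedMap(hashMap);
        syncMap.put("thread-safe", "now");
    }
}
```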



Do's and Don'ts of Interfaces in Java. Is it allowed to do WebDriver driver = new WebDriver() in Selenium?


You would have often seen this line of code in Selenium:

WebDriver driver = new FirefoxDriver();

WebDriver is an interface, and we are declaring a reference variable (driver) of this interface type.

Any object assigned to this WebDriver reference must be an instance of a class that implements the interface, such as FirefoxDriver. We cannot instantiate the interface itself, so WebDriver driver = new WebDriver() is not allowed.



How do you launch Chrome browser using Selenium WebDriver?


To run WebDriver for any browser, we need the browser itself and the Selenium server jar file.

The Mozilla Firefox browser is supported by default by Selenium 2. For other browsers like IE and Chrome you need to follow some basic steps before you are able to launch or automate them with WebDriver.
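As a sketch (assuming the Selenium jars are on the classpath and you have downloaded the ChromeDriver executable; the driver path below is a placeholder for your own install location), launching Chrome typically looks like this:

```java
// Sketch: launching Chrome with Selenium WebDriver. Requires the Selenium
// jars on the classpath and the ChromeDriver executable downloaded locally.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LaunchChrome {
    public static void main(String[] args) {
        // Tell Selenium where the ChromeDriver executable lives (placeholder path)
        System.setProperty("webdriver.chrome.driver",
                "C:\\drivers\\chromedriver.exe");
        WebDriver driver = new ChromeDriver(); // opens a Chrome window
        driver.get("http://www.google.com");
        System.out.println(driver.getTitle());
        driver.quit();                         // close the browser
    }
}
```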

Software Testing - An Easy Way: How are Exceptions Handled in Selenium?

Software Testing - An Easy Way: How are Exceptions Handled in Selenium?: What is an Error? An error is a bug in the application that we may not want to catch. What is an Exception? An Exception is a...

What is the difference between Regression Testing and Retesting?

What is the difference between Regression Testing and Retesting: Regression testing is a type of software testing that verifies that software which was previously developed and tested still performs correctly after being changed or interfaced with other software.

Difference between method Overriding and Overloading in Java


How to Reverse an ArrayList in Java using Collections and Recursion?

To reverse a List in Java, e.g. an ArrayList or LinkedList, you should always use the Collections.reverse() method. It's safe, tested, and probably performs better than the first version of the method you would write yourself to reverse an ArrayList in Java.
In this tutorial, I'll also show you how to reverse an ArrayList of Strings using recursion.
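Both approaches side by side: Collections.reverse() reverses the list in place, while a hand-written recursive version (shown only as an exercise) removes the head, reverses the rest, and appends the head at the end:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReverseListDemo {
    // Recursive reversal (exercise only): remove the head, reverse the
    // remaining list, then append the head at the end.
    static <T> void reverseRecursively(List<T> list) {
        if (list.size() <= 1) return;
        T head = list.remove(0);
        reverseRecursively(list);
        list.add(head);
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>(Arrays.asList("a", "b", "c"));
        Collections.reverse(a);          // library way: in-place reverse
        System.out.println(a);           // prints [c, b, a]

        List<String> b = new ArrayList<>(Arrays.asList("a", "b", "c"));
        reverseRecursively(b);           // hand-written recursive way
        System.out.println(b);           // prints [c, b, a]
    }
}
```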

Built-in Functions In Python - Letter E



enumerate(sequence, start=0)

Return an enumerate object. sequence must be a sequence, an iterator, or some other object which supports iteration. The next() method of the iterator returned by enumerate() returns a tuple containing a count (from start which defaults to 0) and the values obtained from iterating over sequence:

>>> seasons = ['Spring', 'Summer', 'Fall', 'Winter']
>>> list(enumerate(seasons))
[(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
>>> list(enumerate(seasons, start=1))
[(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]
Equivalent to:

def enumerate(sequence, start=0):
    n = start
    for elem in sequence:
        yield n, elem
        n += 1
New in version 2.3.


eval(expression[, globals[, locals]])

The arguments are a Unicode or Latin-1 encoded string and optional globals and locals. If provided, globals must be a dictionary. If provided, locals can be any mapping object.


The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace. If the globals dictionary is present and lacks ‘__builtins__’, the current globals are copied into globals before expression is parsed. This means that expression normally has full access to the standard __builtin__ module and restricted environments are propagated. If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed in the environment where eval() is called. The return value is the result of the evaluated expression. Syntax errors are reported as exceptions. Example:

>>> x = 1
>>> print eval('x+1')
2
This function can also be used to execute arbitrary code objects (such as those created by compile()). In this case pass a code object instead of a string. If the code object has been compiled with 'exec' as the mode argument, eval()'s return value will be None.

Hints: dynamic execution of statements is supported by the exec statement. Execution of statements from a file is supported by the execfile() function. The globals() and locals() functions return the current global and local dictionary, respectively, which may be useful to pass around for use by eval() or execfile().

See ast.literal_eval() for a function that can safely evaluate strings with expressions containing only literals.


execfile(filename[, globals[, locals]])

This function is similar to the exec statement, but parses a file instead of a string. It is different from the import statement in that it does not use the module administration — it reads the file unconditionally and does not create a new module. [1]

The arguments are a file name and two optional dictionaries. The file is parsed and evaluated as a sequence of Python statements (similarly to a module) using the globals and locals dictionaries as global and local namespace. If provided, locals can be any mapping object. Remember that at module level, globals and locals are the same dictionary. If two separate objects are passed as globals and locals, the code will be executed as if it were embedded in a class definition.


If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed in the environment where execfile() is called. The return value is None.

Note: The default locals act as described for function locals() below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see effects of the code on locals after function execfile() returns. execfile() cannot be used reliably to modify a function's locals.

Built-in Functions In Python - Letter D



delattr(object, name)
The arguments are an object and a string. The string must be the name of one of the object’s attributes. The function deletes the named attribute, provided the object allows it. For example, delattr(x, 'foobar') is equivalent to del x.foobar.
class dict(**kwarg)
class dict(mapping, **kwarg)
class dict(iterable, **kwarg)
Create a new dictionary. The dict object is the dictionary class. See dict and Mapping Types — dict for documentation about this class.
For other containers see the built-in list, set, and tuple classes, as well as the collections module.

Difference between method Overriding and Overloading in Java


1)
Method overloading is used to increase the readability of the program.
Method overriding is used to provide a specific implementation of a method that is already provided by its superclass.
2)
Method overloading is performed within a class.
Method overriding occurs in two classes that have an IS-A (inheritance) relationship.
3)
In method overloading, the parameters must be different.
In method overriding, the parameters must be the same.
4)
Method overloading is an example of compile-time polymorphism.
Method overriding is an example of runtime polymorphism.
5)
In Java, method overloading can't be performed by changing only the return type of the method. The return type can be the same or different in overloading, but the parameter list must change.
In method overriding, the return type must be the same or covariant.
Java Method Overloading example

class OverloadingExample {
    static int add(int a, int b) { return a + b; }
    static int add(int a, int b, int c) { return a + b + c; }
}
Java Method Overriding example

class Animal {
    void eat() { System.out.println("eating..."); }
}
class Dog extends Animal {
    void eat() { System.out.println("eating bread..."); }
}
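To see the runtime dispatch that overriding enables, consider this variant of the example above, rewritten to return the string so the result is easy to check: calling eat() through an Animal reference still runs the Dog implementation.

```java
public class OverridingDemo {
    static class Animal {
        String eat() { return "eating..."; }
    }
    static class Dog extends Animal {
        @Override
        String eat() { return "eating bread..."; }
    }

    public static void main(String[] args) {
        Animal a = new Dog();        // reference type Animal, object type Dog
        System.out.println(a.eat()); // prints "eating bread..." (runtime dispatch)
    }
}
```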

How to set the ADB Path in System Variables: Android, Mobile Automation Testing

Check the installation path; by default, Android is installed in C:\Program Files (x86)\Android.
So update the PATH variable with the following line.
C:\Program Files (x86)\Android\android-sdk\tools\;C:\Program Files (x86)\Android\android-sdk\platform-tools\

Now you can start ADB server from CMD regardless of where the prompt is at.

Android SDK ADB server in CMD screen

How to edit a system variable

Here's a short how-to for the newbies. What you need is the Environment Variables dialog.
  1. Click Start (Orb) menu button.
  2. Right click on Computer icon.
  3. Click on Properties. This will bring up System window in Control Panel.
  4. Click on Advanced System Settings on the left. This will bring up the System Properties window with Advanced tab selected.
  5. Click on Environment Variables button on the bottom of the dialog. This brings up the Environment Variables dialog.
  6. In the System Variables section, scroll down till you see Path.
  7. Click on Path to select it, then the Edit button. This will bring up the Edit System Variable dialog.
  8. While the Variable value field is selected, press the End key on your keyboard to go to the right end of the line, or use the arrow keys to move the marker to the end.
  9. Type in ;C:\Program Files (x86)\Android\android-sdk\tools\;C:\Program Files (x86)\Android\android-sdk\platform-tools\ and click OK.
  10. Click OK again, then OK once more to save and exit out of the dialogs.
That's it! You can now start any Android SDK tool, e.g. ADB or Fastboot, regardless of what your current directory is in CMD. For good measure here's what the dialog looks like. This is where you edit the Path variable.

environment variables

List of tools used for automated cross-browser testing of Ajax websites


Sahi (http://sahi.co.in/) can also be added to this list. Some good points about Sahi:

1) It handles AJAX/page load delays automatically. No need for explicit wait statements in code.

2) It can handle applications with dynamically generated ids. It has easy identification mechanisms to relate one element to another (e.g., click the delete button near user "Ram"). ExtJS, ZkOSS, GWT, SmartGWT etc. have been handled via Sahi.
Sample link:
http://books.zkoss.org/wiki/Smal...

Why is everyone Obsessed with BIG Data ?


What is Big Data?


Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.


Alerts in Java

JavaScript Alerts

Webdriver has an Alerts API to handle JavaScript alerts. Alert is an interface.


// Get a handle to the open alert, prompt or confirmation
Alert alert = driver.switchTo().alert();
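A slightly fuller sketch of the same API (the handleAlert helper is made up for illustration; it assumes the Selenium jars are on the classpath and a page has just raised a JavaScript alert):

```java
// Sketch: reading and accepting a JavaScript alert with the standard
// org.openqa.selenium Alert interface.
import org.openqa.selenium.Alert;
import org.openqa.selenium.WebDriver;

public class AlertDemo {
    static void handleAlert(WebDriver driver) {
        // Get a handle to the open alert, prompt or confirmation
        Alert alert = driver.switchTo().alert();
        String message = alert.getText(); // read the alert text
        System.out.println("Alert says: " + message);
        alert.accept();                   // click OK (use alert.dismiss() to cancel)
    }
}
```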

How should you learn Big Data ?

Generally speaking, NoSQL databases aren't really used for analytics (though they may be a data source).

1) Think about a big data problem you want to solve.

Traditionally, big data has been described by the "3Vs": Volume, Variety, Velocity.  What is a real analytics problem that is best solved using big data tools?  What kind of metrics do you want to capture?  The most common use cases today involve scraping large volumes of log data.  This is because log data tends to be very unstructured, can come from multiple sources, and especially for popular websites, can be huge (terabytes+ a day).  Thus having a framework for performing distributed computing tasks is essential to solve this problem.

2) Download and setup your big data solution







The easiest thing to do is to use a pre-built virtual machine, which just about any Hadoop provider makes freely available [1], and run it locally. You could also use a service like Amazon Web Services. Most commonly, people use the map-reduce framework and Hive for crunching large volumes of data. Since you're just looking to learn, you won't need terabytes, or even gigabytes, of data to play with, so getting access to a 100-node cluster won't be a priority, although there are certainly challenges to overcome and understand once you start to get into multi-node environments.

3) Solve your big data problem
Once you have your environment set up, get to coding! There is plenty of documentation and tutorials out there to reference and learn from [2], and really, just type questions into Google and you'll get a ton of resources. Read up on the tools and understand how the technology can be applied to your use case. Think about the kinds of metrics you're looking to capture within your data, what kind of map-reduce programs you will need to write to capture the data you want to analyze, and how you can leverage something like Hive or Pig to do a lot of the heavy number crunching. Something that probably won't be apparent in a single-node environment, but is a real-world problem in any distributed environment, is understanding data skew and how it affects performance [3].
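As a warm-up, the canonical map-reduce exercise is word count. The sketch below imitates the map phase (emit a token per word) and the reduce phase (group and count) with plain Java streams, no Hadoop involved; the log lines are made up for the example:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // "Map" phase: split lines into words; "reduce" phase: group by word and count.
    static Map<String, Long> count(String[] lines) {
        return Arrays.stream(lines)
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        String[] logLines = { "error disk full", "error network down", "ok" };
        System.out.println(count(logLines)); // prints the word -> count map
    }
}
```

In real map-reduce, the same two phases run across many machines, with the framework shuffling each word's pairs to the reducer responsible for that key.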

4) Analytics & Visualization: The sexy side of Big Data & BI
Now that you've solved your big data problem and have your data in a manageable format, it's time to dazzle your boss with some sweet reports. Most enterprise architectures that leverage Hadoop still have a SQL database for storing and reporting data out of Hadoop (you will quickly come to realize that map-reduce has a very long response time, even on small data sets). Loading data out of Hadoop and into a SQL database is good practice for the real world, but for the sake of learning the big data side of it, not necessary. There are several (free) reporting tools out there that will connect to Hadoop/Hive directly and will work fine for learning purposes [4]. If you want to be the cool kid on the block (and super employable at large companies), I would pick up Tableau [5]. You could also pick up some predictive modeling and machine learning skills with some of the tools that are out there [6], and maybe start calling yourself a data scientist.







  • MapReduce is the Google paper that started it all (Page on googleusercontent.com). It's a paradigm for writing distributed code inspired by some elements of functional programming. You don't have to do things this way, but it neatly fits a lot of problems we try to solve in a distributed way. The Google internal implementation is called MapReduce, and Hadoop is its open-source implementation. Amazon's Hadoop instance is called Elastic MapReduce (EMR) and has plugins for multiple languages.
  • HDFS is an implementation inspired by the Google File System (GFS) to store files across a bunch of machines when it's too big for one. Hadoop consumes data in HDFS (Hadoop Distributed File System).
  • Apache Spark is an emerging platform that has more flexibility than MapReduce but more structure than a basic message passing interface. It relies on the concept of distributed data structures (what it calls RDDs) and operators. See the Apache Spark page at apache.org for more.
  • Because Spark is a lower-level thing that sits on top of a message passing interface, it has higher-level libraries to make it more accessible to data scientists. The machine learning library built on top of it is called MLlib, and there's a distributed graph library called GraphX.
  • Pregel and its open-source twin Giraph are a way to do graph algorithms on billions of nodes and trillions of edges over a cluster of machines. Notably, the MapReduce model is not well suited to graph processing, so Hadoop/MapReduce is avoided in this model, but HDFS/GFS is still used as a data store.
  • Zookeeper is a coordination and synchronization service with which a distributed set of computers make decisions by consensus, handle failure, etc.
  • Flume and Scribe are logging services, Flume is an Apache project and Scribe is an open-source Facebook project. Both aim to make it easy to collect tons of logged data, analyze it, tail it, move it around and store it to a distributed store.
  • Google BigTable and its open-source twin HBase were meant to be read-write distributed databases, originally built for the Google crawler, that sit on top of GFS/HDFS and MapReduce/Hadoop. Google Research Publication: BigTable
  • Hive and Pig are abstractions on top of Hadoop designed to help analysis of tabular data stored in a distributed file system (think of excel sheets too big to store on one machine). They operate on top of a data warehouse, so the high-level idea is to dump data once and analyze it by reading and processing it, instead of updating cells, rows, and columns individually. Hive has a language similar to SQL, while Pig is inspired by Google's Sawzall (Google Research Publication: Sawzall). You generally don't update a single cell in a table when processing it with Hive or Pig.
  • Hive and Pig turned out to be slow because they were built on Hadoop which optimizes for the volume of data moved around, not latency. To get around this, engineers bypassed and went straight to HDFS. They also threw in some memory and caching and this resulted in Google's Dremel (Dremel: Interactive Analysis of Web-Scale Datasets), F1 (F1 - The Fault-Tolerant Distributed RDBMS Supporting Google's Ad Business), Facebook's Presto (Presto | Distributed SQL Query Engine for Big Data), Apache Spark SQL (Page on apache.org ), Cloudera Impala (Cloudera Impala: Real-Time Queries in Apache Hadoop, For Real), Amazon's Redshift, etc. They all have slightly different semantics but are essentially meant to be programmer or analyst friendly abstractions to analyze tabular data stored in distributed data warehouses.
  • Mahout (Scalable machine learning and data mining) is a collection of machine learning libraries written in the MapReduce paradigm, specifically for Hadoop. Google has its own internal version, but they haven't published a paper on it as far as I know.
  • Oozie is a workflow scheduler. The oversimplified description would be that it's something that puts together a pipeline of the tools described above. For example, you can write an Oozie script that will scrape your production HBase data to a Hive warehouse nightly, then a Mahout script will train with this data. At the same time, you might use pig to pull in the test set into another file and when Mahout is done creating a model you can pass the testing data through the model and get results. You specify the dependency graph of these tasks through Oozie (I may be messing up terminology since I've never used Oozie but have used the Facebook equivalent).
  • Lucene is a bunch of search-related and NLP tools, but its core feature is being a search index and retrieval system. It takes data from a store like HBase and indexes it for fast retrieval from a search query. Solr uses Lucene under the hood to provide a convenient REST API for indexing and searching data. ElasticSearch is similar to Solr.
  • Sqoop is a command-line tool for bulk-copying SQL data into a distributed warehouse. It's what you might use to snapshot and copy your database tables to a Hive warehouse every night.
  • Hue is a web-based GUI to a subset of the above tools - http://gethue.com/
  • Big data is really a combination of subjects: it mainly requires programming, analysis, NLP, machine learning, and mathematics.

    Here are a bunch of courses I came across:

        Introduction to CS Course
        Notes: Introduction to Computer Science Course that provides instructions on coding.
        Online Resources:
        Udacity - intro to CS course,
        Coursera - Computer Science 101

        Code in at least one object oriented programming language: C++, Java, or Python
        Beginner Online Resources:
        Coursera - Learn to Program: The Fundamentals,
        MIT Intro to Programming in Java,
        Google's Python Class,
        Coursera - Introduction to Python,
        Python Open Source E-Book

        Intermediate Online Resources:
        Udacity's Design of Computer Programs,
        Coursera - Learn to Program: Crafting Quality Code,
        Coursera - Programming Languages,
        Brown University - Introduction to Programming Languages

        Learn other Programming Languages
        Notes: Add to your repertoire - Java Script, CSS, HTML, Ruby, PHP, C, Perl, Shell. Lisp, Scheme.
        Online Resources: w3school.com - HTML Tutorial, Learn to code

        Test Your Code
        Notes: Learn how to catch bugs, create tests, and break your software
        Online Resources: Udacity - Software Testing Methods, Udacity - Software Debugging

        Develop logical reasoning and knowledge of discrete math
        Online Resources:
        MIT Mathematics for Computer Science,
        Coursera - Introduction to Logic,
        Coursera - Linear and Discrete Optimization,
        Coursera - Probabilistic Graphical Models,
        Coursera - Game Theory.

        Develop strong understanding of Algorithms and Data Structures
        Notes: Learn about fundamental data types (stack, queues, and bags), sorting algorithms (quicksort, mergesort, heapsort), and data structures (binary search trees, red-black trees, hash tables), Big O.
        Online Resources:
        MIT Introduction to Algorithms,
        Coursera - Introduction to Algorithms Part 1 & Part 2,
        Wikipedia - List of Algorithms,
        Wikipedia - List of Data Structures,
        Book: The Algorithm Design Manual

        Develop a strong knowledge of operating systems
        Online Resources: UC Berkeley Computer Science 162

        Learn Artificial Intelligence Online Resources:
        Stanford University - Introduction to Robotics, Natural Language Processing, Machine Learning

        Learn how to build compilers
        Online Resources: Coursera - Compilers

        Learn cryptography
        Online Resources: Coursera - Cryptography, Udacity - Applied Cryptography

        Learn Parallel Programming
        Online Resources: Coursera - Heterogeneous Parallel Programming


    Tools and technologies for Bigdata:

    Apache spark - Apache Spark is an open-source data analytics cluster computing framework originally developed in the AMPLab at UC Berkeley.[1] Spark fits into the Hadoop open-source community, building on top of the Hadoop Distributed File System (HDFS).[2] However, Spark is not tied to the two-stage MapReduce paradigm, and promises performance up to 100 times faster than Hadoop MapReduce for certain applications.

    Database pipelining -  


Corona :

Corona is a new scheduling framework that separates cluster resource management from job coordination.[1] Corona introduces a cluster manager whose only purpose is to track the nodes in the cluster and the amount of free resources. A dedicated job tracker is created for each job, and can run either in the same process as the client (for small jobs) or as a separate process in the cluster (for large jobs).


One major difference from our previous Hadoop MapReduce implementation is that Corona uses push-based, rather than pull-based, scheduling. After the cluster manager receives resource requests from the job tracker, it pushes the resource grants back to the job tracker. Also, once the job tracker gets resource grants, it creates tasks and then pushes these tasks to the task trackers for running. There is no periodic heartbeat involved in this scheduling, so the scheduling latency is minimized.


UI Automator Viewer : Locate Elements in Mobile apk testing

Find android application element details using UI Automator Viewer



Friends, let's see how to find element details of an Android application using the Android "UI Automator Viewer". Following are the steps:
1. Open the Android application on an emulator or real device, as I have done below.



2. Go to the "tools" folder in your installed Android SDK folder.
3. Double-click the uiautomatorviewer.bat file; a screen like the one below should open.



4. Click the icon marked on the above screen; the open application will be displayed in UI Automator Viewer as in the screen below. Move your mouse over an element in the left panel, and its details will appear in the right panel.

Appium : Why and How ?



Appium is a mobile application testing tool that is currently trending in the mobile automation testing industry.

Appium is a server, written in Node.js, that supports mobile devices including Android and iOS. It can be built and installed from source or directly from NPM.

Appium lets you automate any mobile app from any language and any test framework, with full access to back-end APIs and databases from test code. Write tests with your favorite tools using any of the supported programming languages.

Appium will be very easy to learn if you already know Selenium WebDriver.
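As a preview of where the setup below leads, a first session in Java looks roughly like this. This is a sketch using the Appium Java client; the server URL, device name, and APK path are placeholders, and the exact constructor signature varies slightly between client versions:

```java
// Sketch of a first Appium session using the Appium Java client
// (io.appium.java_client). Requires a running Appium server and the
// client jars on the classpath; device name and APK path are placeholders.
import java.net.URL;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class FirstAppiumTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "emulator-5554");   // placeholder
        caps.setCapability("app", "C:\\apps\\myapp.apk");    // placeholder

        // Appium server started separately (default port 4723)
        AndroidDriver driver = new AndroidDriver(
                new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            System.out.println("Session started: " + driver.getSessionId());
        } finally {
            driver.quit(); // always end the session
        }
    }
}
```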

Requirements:


iOS

Mac OSX 10.7+
XCode 4.5+ w/ Command Line Tools

Android

Mac OSX 10.7+ or Windows 7+ or Linux; Android SDK ≥ 16 (SDK < 16 in Selendroid mode)



Appium Setup Environment for Windows:-

Pre-requirement:-


Download Java/JDK (Setup Environment)
Download Appium Server
TestNG
Download Android Studio (Setup Environment; this also installs Node.js and UIAutomatorViewer)
Download Eclipse (How to Create Emulator)


Download JAVA and Setup Environment:

Click here to Download JAVA

After downloading from this link, set up the environment on your system.

Now the question is: how do you set up the environment on your system?

Windows :

  1. From the desktop, right click the Computer icon.
  2. Choose Properties from the context menu.
  3. Click the Advanced system settings link.
  4. Click Environment Variables. In the section System Variables, find the PATH environment variable and select it. Click Edit. If the PATH environment variable does not exist, click New.
  5. In the Edit System Variable (or New System Variable) window, specify the value of the PATH environment variable. Click OK. Close all remaining windows by clicking OK.


Note: You may see a PATH environment variable similar to the following when editing it from the Control Panel:
%JAVA_HOME%\bin;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem
Variables enclosed in percentage signs (%) are existing environment variables. If one of these variables is listed in the Environment Variables window from the Control Panel (such as JAVA_HOME), then you can edit its value. If it does not appear, then it is a special environment variable that the operating system has defined. For example, SystemRoot is the location of the Microsoft Windows system folder. To obtain the value of an environment variable, enter the following at a command prompt (this example obtains the value of the SystemRoot environment variable): echo %SystemRoot%



Download JDK and Setup Environment:

Click here to Download JDK

1. Under "Java Platform, Standard Edition" ==> "Java SE 8ux" ==> click "JDK Download"
2. Check "Accept License Agreement"
3. Choose your OS, e.g. Windows x86 for 32-bit Windows or Windows x64 for 64-bit Windows. You can check whether your Windows is 32-bit or 64-bit via "Control Panel" ==> System ==> under "System Type"



To edit the PATH environment variable in Windows XP/Vista/7/8:




  1. Control Panel ⇒ System ⇒ Advanced system settings
  2. Switch to "Advanced" tab ⇒ Environment Variables
  3. In "System Variables", scroll down to select "PATH" ⇒ Edit




Download APPIUM Server:

Click here to download the latest Appium Server





Download Android Studio and Setup Environment: 

Click here to Download Android Studio SDK

  1. Install Android SDK in your system.
  2. Set ANDROID_HOME environment variable which points to your SDK directory’s \sdk\ folder.
  3. Append ‘%ANDROID_HOME%\platform-tools’ value to your PATH environment variable.
  4. Start your Android emulator or connect your Android device to your system 
  5. Open Command Prompt and navigate to your Android SDK’s \platform-tools\ directory (Eg. D:\adt-bundle-windows-x86_64-20130514\sdk\platform-tools).

(When you download Android Studio, Node.js and UIAutomatorViewer will also be downloaded.)

Node.js :- The Appium server is written in Node.js.

Go to the Android SDK folder > Node.js is placed there.



Using node.js:


  • Go to the Manual download page
  • Download the Windows Installer (.msi)
  • To run the installer, click Run.
  • After installing the node.js open the cmd prompt and use following commands
  • --> npm install -g appium # to install the appium
    --> npm install wd # get appium client
UIAutomatorViewer :- This uiautomator tool helps you find a mobile element's id, classname and xpath locators.


Go to the Android SDK folder > Tools > uiautomatorviewer is placed at the bottom of the folder.


 

Would you like to see your post published here?

Want to reach out to millions of readers out there? Send in your posts to
softwaretestinganeasyway@gmail.com
We will help you reach out to the vast community of testers and let the world notice you.