Top-notch app testing is crucial, but it is difficult: device fragmentation is a major issue, with thousands of distinct Android devices on the market. Even iOS is not entirely immune; across all of the "iDevices" currently supported, there are over 30 different iOS device models in use today. Add multiple languages for different countries, and the test matrix becomes daunting, to say the least.
Taking a multi-pronged approach to testing helps to make the overall process more manageable, thorough, and successful. Take a look at three key steps to addressing the challenge of delivering high quality in a mobile world, as well as a new Pivotal Cloud Foundry microservice to help streamline the process of distributing test builds.
Test Driven Development
Testing early and often is critical for creating high quality apps. This helps catch bugs early in the development process, fixing minor issues before they turn into expensive problems and require tedious debugging later. A great way to achieve this is with Test Driven Development (TDD), one of the key concepts within the framework of agile development methodologies.
TDD is a reversal of the traditional approach to software development in which you write code, then write the automated tests for that code. TDD instead involves initially writing an automated test for a given feature, then coding the feature itself and ensuring that it passes the test.
This methodology has been shown to not only produce higher quality software but increase programmer productivity as well. It ensures that all code is covered by at least one test, which raises the baseline of quality. Beyond that, it encourages developers to focus on actual use cases throughout the development process and can reduce extraneous code. The net result is better quality software, delivered faster.
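To make the test-first flow concrete, here is a minimal sketch in Java with JUnit, using a hypothetical PriceCalculator class: the test is written first, and only then is the class implemented with just enough logic to make it pass.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1: write the test before the production code exists.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAboveOneHundred() {
        PriceCalculator calculator = new PriceCalculator();
        // 120.00 with a 10% discount should come out to 108.00
        assertEquals(108.00, calculator.discountedPrice(120.00), 0.001);
    }
}

// Step 2: write the feature, driven by the failing test above.
class PriceCalculator {
    double discountedPrice(double price) {
        return price > 100.0 ? price * 0.9 : price;
    }
}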
With the myriad devices on the market today, you’ll want to automate as much as possible. This involves executing test scripts on devices, and it is particularly useful for regression testing and smoke tests (simple tests to ensure the app functions at a very basic level) because these are generally repetitive and time consuming to do manually.
There are many tools available for automated testing, and several of them require the use of rooted or jailbroken devices. We recommend avoiding those, since rooting phones voids warranties, opens security holes, and even changes the behavior of the device being tested. Don't waste time testing on devices that are not set up for their legally intended use.
Automated testing saves significant time and money by allowing you to run tests on many devices simultaneously and quickly. Since machines never miss a step, leveraging automation reduces risk of human error throughout the testing process.
User Testing and Automated Distribution
Despite the advantages of automation, it doesn’t address everything. User testing is still required in order to cover the unexpected aspects of human behavior that happen in real life. It is also critical to obtain valuable insights about how people actually use your app.
Once you’ve completed the automated tests, it’s time to put your app in the hands of real people, starting with the QA team and continuing with test user groups. You’ll want to stick to this “order of operations” in order to catch and fix the most basic issues before bringing your app in front of your most valuable testing audience.
Getting a pre-release mobile app in the hands (or pockets, as it were) of users can be a painful process. It often involves emailing a file, dragging/dropping it into a desktop application, and syncing it via cable to the device. This can especially be a nuisance for typical non-tech savvy users who don’t do it every day.
The challenges with the user testing process don't stop there. Keeping tabs on test user groups for different apps and making sure everyone has the correct versions of the correct apps is another burden for mobile development and QA teams. Using a platform to manage the distribution and version management of test builds will streamline this process and reduce frustration. Ultimately, an effective process is necessary to scale your user testing practice for the multiple apps and frequent app updates that define a best-in-class mobile testing strategy.
The new App Distribution for Pivotal Cloud Foundry simplifies this process by providing an easy, intuitive way for users to do over-the-air (OTA) installs of pre-release apps with the tap of a button. It handles device registration, user and group management, distribution of apps, notification of new available app versions and more. It supports all of the major mobile platforms and runs on Pivotal Cloud Foundry, so enterprises can deploy it in a private cloud, on premises for full control and privacy. This makes life a lot easier for users testing the apps, as well as the mobile development and QA teams releasing apps. Also check out Useful Browser Extensions and Web Services for QA Testing.
Following these three steps diligently will lead to higher quality apps, better user engagement, and stronger app ratings. Moreover, it will help your organization achieve overall success in your mobile app efforts in a world where mobile is increasingly the primary way customers, employees, and business partners work and interact.
Appium is a free open-source test automation framework for mobile testing. It is a wrapper that translates Selenium commands into iOS and Android commands.
You can write your tests against iOS and Android platforms using the same API, enabling code reuse between test suites. But you still need separate iOS and Android scripts, because the UI elements are different on the two platforms.
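As a rough illustration, here is a hedged sketch of an Android smoke test using the Appium Java client. The device name, app path, and element id are placeholders, and the exact capability names and server URL can vary by Appium and client version.

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class LoginSmokeTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Android Emulator");   // placeholder
        caps.setCapability("app", "/path/to/app-debug.apk");    // placeholder

        // Connect to a locally running Appium server.
        AndroidDriver driver = new AndroidDriver(
                new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // Same WebDriver-style API as Selenium; only the locator
            // would change for the iOS build of the same app.
            WebElement loginButton = driver.findElement(By.id("com.example:id/login"));
            loginButton.click();
        } finally {
            driver.quit();
        }
    }
}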
- Appium is a test automation tool for mobile web, native, and hybrid software applications.
- It is an open-source automation tool used to automate apps on the Android and iOS platforms.
- Most importantly, Appium is a cross-platform automation tool, so you can write automated tests against both iOS and Android using the same API.
- Being cross-platform enables a large amount of code reuse between iOS and Android test suites.
- Appium supports test automation on emulators and simulators as well as on physical mobile devices.
- Appium was developed around a few key philosophy points: 1. You should not have to recompile your app to automate it. 2. You should not be locked into a specific language or framework. 3. It should be open source. 4. It should not reinvent the wheel when it comes to automation APIs.
If you are a mobile app test engineer, Appium can make your regression testing easier, especially for large mobile apps that are continuously updated with new features and functionality. Another major benefit of using Appium for mobile app automation is that it supports the multiple platforms and languages listed below, and you can use any testing framework.
Multiple Platforms Support
Appium supports multiple platforms, including Android and iOS.
Multiple Languages Support
Appium supports multiple languages through the Selenium WebDriver API and language-specific client libraries.
The method of database testing differs depending on whether the system is OLTP or OLAP. For OLTP (Online Transaction Processing), you check whether data is inserted, deleted, and modified properly.
Database testing is a kind of software testing in which you make sure that the information entered through web forms is saved to the database as expected, and that data constraints are maintained during migration or upgrades of your web application.
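For example, an OLTP-style check might query the backing table directly after a form submission and compare the stored values with what was entered. The sketch below uses JDBC; the connection details, table, and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UserInsertCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and schema for illustration only.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/appdb", "qa_user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT email, status FROM users WHERE user_name = ?")) {
            ps.setString(1, "testuser01");
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    throw new AssertionError("Row was not inserted");
                }
                // Compare the stored value with what was entered in the UI.
                if (!"testuser01@example.com".equals(rs.getString("email"))) {
                    throw new AssertionError("Email was not saved as entered");
                }
                System.out.println("Insert verified, status = " + rs.getString("status"));
            }
        }
    }
}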
- Checking whether the data submitted through a UI application is actually stored in the database is referred to as database testing.
- Data warehouse testing: feed files from a UI application are loaded into a system, and the data is sent through different layers after business rules are applied, using ETL tools like Informatica. The QA team provides complete testing from the file level to the table level using Unix scripts and SQL queries.
- Database testing generally deals with the following:
- Verifying the integrity of UI data against database data.
- Verifying that no junk data is displayed in the UI beyond what is stored in the database.
- Verifying the execution of stored procedures with input values taken from the database tables.
- Verifying data migration.
- Data validity testing.
- Data integrity testing.
- Database performance.
- Testing of procedures, triggers, and functions.
- For data validity testing you should have knowledge of SQL.
- For data integrity testing you should know about referential integrity and the different constraints.
- For performance-related testing you should have an idea of the table structure and design.
- Validating the mapping of data from the front end (UI) to the back end (DB tables).
- Validating data integrity (ensuring that data is consistent across all related tables).
- Validating the ACID properties (Atomicity, Consistency, Isolation, Durability) for all transactions.
- Validating constraints and database performance.
- Diagnosis of a specific database on a server.
- Industry-standard benchmark testing of databases.
- Managing and governing database resources and their utilization.
Database Comparison Testing
You can compare two different data sets to verify the integrity of the data and ensure accurate reporting (see the sketch after this list).
- Run queries to check whether the data has been processed correctly.
- Detailed drill-down information for database testing errors and data divergence
- Data mirroring with different data versions
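One simple way to compare two data sets is to query for rows that appear in one table but not the other. The following JDBC sketch assumes a database that supports the SQL EXCEPT operator; the connection details, table names, and columns are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DataSetComparison {
    public static void main(String[] args) throws Exception {
        // Rows present in the source table but missing or different in the target.
        String divergenceQuery =
            "SELECT order_id, amount FROM orders_source " +
            "EXCEPT " +
            "SELECT order_id, amount FROM orders_target";

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/reportdb", "qa_user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(divergenceQuery)) {
            int mismatches = 0;
            while (rs.next()) {
                mismatches++;
                System.out.println("Divergent row: order_id=" + rs.getLong("order_id")
                        + ", amount=" + rs.getBigDecimal("amount"));
            }
            System.out.println(mismatches == 0
                    ? "Data sets match"
                    : mismatches + " divergent rows found");
        }
    }
}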
Database Validation
You can validate various databases and their quality for further usage and analysis.
- Validation of database server configuration
- Verification of database server load
- Identifying and authenticating database end users
- Streamline and accelerate data validation
- Fewer data anomalies due to early detection of defects
- Better re-use of test components, reducing time-to-market and simplifying the test management process
Points of Difference Between Manual and Automation Testing
Manual testing and automation testing can be considered the pillars of software testing; they form the core of this field. In practice, testing is carried out with a combination of both. To become an expert in either or both, one would need formal software testing training.
For now, we will look at the points of difference between the two. This should give you a fair idea of both.
- It is carried out manually: a tester executes all the steps, e.g. in a test case, himself or herself.
- Manual testing is the most basic step; without carrying it out, one cannot proceed to automation testing.
- In this kind of testing, testers also carry out random testing in order to discover bugs.
- Because manual testing relies on techniques such as error guessing, it generally uncovers more bugs than automation testing.
- The time consumed is greater.
- Manual testing is generally carried out in a sequential manner.
- Regression testing is tedious to carry out manually.
- More testers are required for manual testing, since the test cases must be executed by hand.
- Results are less accurate, since manual errors come into the picture.
- Batch testing cannot be performed.
- Manual testing is considered less reliable.
- Programming is not involved in manual testing.
- Manual testing is considered to be of lower quality.
- Manual testing can be carried out without the use of any tool.
- All the well-known STLC stages, such as test planning, test deployment, test execution, result investigation, and bug tracking and reporting, fall under manual testing and are performed through human effort.
- It is carried out with the help of automation tools such as QTP and Selenium. Many are available in the market; based on factors like requirements and budget, one needs to choose one.
- Automation testing can be seen as an integral, continuous complement to manual testing.
- In automation testing, we test the application by running scripts; the tools let us both write and execute them.
- Automation testing is most useful when repetitive functionality of the software is to be tested.
- The time consumed is less, as expected.
- Automation testing can be carried out on a number of machines at the same time.
- Regression testing is easier to carry out with automation, thanks to the tools.
- Fewer testers are required for automation testing, since the test cases are executed by the automation tools.
- Results are highly accurate, since manual errors are out of the question.
- Multiple kinds of batch testing can be carried out with automation.
- Automation testing is considered the more reliable of the two because of the involvement of tools.
- Programming is the heart of automation testing, as scripts need to be written in languages such as Perl, Python, etc.
- Automation testing is considered to be of higher quality.
- Tools form an integral part of automation testing.
- In automation testing, all the popular STLC stages are completed with various open-source and commercial tools such as Selenium, JMeter, QTP, LoadRunner, and WinRunner.
Why Automation Testing?
In software testing, test automation is the use of special software (separate from the software being tested) to control the execution of tests and the comparison of actual outcomes with predicted outcomes.
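As a concrete illustration of comparing actual outcomes with predicted outcomes, here is a minimal Selenium WebDriver test in Java with JUnit; the URL and expected title are placeholder values.

import org.junit.jupiter.api.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertEquals;

class HomePageTitleTest {

    @Test
    void homePageShowsExpectedTitle() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://www.example.com");        // application under test
            String actualTitle = driver.getTitle();       // actual outcome
            assertEquals("Example Domain", actualTitle);  // predicted outcome
        } finally {
            driver.quit();
        }
    }
}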
Do's and Don'ts of Interfaces in Java: is it allowed to do WebDriver driver = new WebDriver() in Selenium?
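The short answer is no: WebDriver is an interface in Selenium, so it cannot be instantiated with new. You create a concrete driver such as ChromeDriver and refer to it through the WebDriver interface, as in this small sketch.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverExample {
    public static void main(String[] args) {
        // WebDriver driver = new WebDriver();   // does not compile: WebDriver is an interface
        WebDriver driver = new ChromeDriver();   // or new FirefoxDriver(), etc.
        driver.get("https://www.example.com");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}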
To reverse a List in Java, e.g. an ArrayList or LinkedList, you should always use the Collections.reverse() method. It's safe, tested, and will probably perform better than the first version of a reversal method you write yourself. In this tutorial, I'll also show you how to reverse an ArrayList of String using recursion.
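Here is a sketch of both approaches: the Collections.reverse() library call, and a simple recursive reversal for comparison (the helper method and sample data are illustrative).

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ReverseListDemo {

    // Recursive in-place reversal: swap the outermost pair, then recurse inward.
    static <T> void reverseRecursively(List<T> list, int from, int to) {
        if (from >= to) {
            return;
        }
        T tmp = list.get(from);
        list.set(from, list.get(to));
        list.set(to, tmp);
        reverseRecursively(list, from + 1, to - 1);
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("alpha", "beta", "gamma", "delta"));

        Collections.reverse(names);                      // preferred: the library method
        System.out.println(names);                       // [delta, gamma, beta, alpha]

        reverseRecursively(names, 0, names.size() - 1);  // back to the original order
        System.out.println(names);                       // [alpha, beta, gamma, delta]
    }
}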
enumerate(sequence, start=0)
Return an enumerate object. sequence must be a sequence, an iterator, or some other object which supports iteration. The next() method of the iterator returned by enumerate() returns a tuple containing a count (from start, which defaults to 0) and the values obtained from iterating over sequence:
>>> seasons = ['Spring', 'Summer', 'Fall', 'Winter']
>>> list(enumerate(seasons))
[(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
>>> list(enumerate(seasons, start=1))
[(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]
def enumerate(sequence, start=0):
    n = start
    for elem in sequence:
        yield n, elem
        n += 1
New in version 2.3.
eval(expression[, globals[, locals]])
The arguments are a Unicode or Latin-1 encoded string and optional globals and locals. If provided, globals must be a dictionary. If provided, locals can be any mapping object.
The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace. If the globals dictionary is present and lacks ‘__builtins__’, the current globals are copied into globals before expression is parsed. This means that expression normally has full access to the standard __builtin__ module and restricted environments are propagated. If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed in the environment where eval() is called. The return value is the result of the evaluated expression. Syntax errors are reported as exceptions. Example:
>>> x = 1
>>> print eval('x+1')
2
This function can also be used to execute arbitrary code objects (such as those created by compile()). In this case pass a code object instead of a string. If the code object has been compiled with 'exec' as the mode argument, eval()‘s return value will be None.
Hints: dynamic execution of statements is supported by the exec statement. Execution of statements from a file is supported by the execfile() function. The globals() and locals() functions returns the current global and local dictionary, respectively, which may be useful to pass around for use by eval() or execfile().
See ast.literal_eval() for a function that can safely evaluate strings with expressions containing only literals.
execfile(filename[, globals[, locals]])
This function is similar to the exec statement, but parses a file instead of a string. It is different from the import statement in that it does not use the module administration — it reads the file unconditionally and does not create a new module.
The arguments are a file name and two optional dictionaries. The file is parsed and evaluated as a sequence of Python statements (similarly to a module) using the globals and locals dictionaries as global and local namespace. If provided, locals can be any mapping object. Remember that at module level, globals and locals are the same dictionary. If two separate objects are passed as globals and locals, the code will be executed as if it were embedded in a class definition.
If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed in the environment where execfile() is called. The return value is None.
Note The default locals act as described for function locals() below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see effects of the code on locals after function execfile() returns. execfile() cannot be used reliably to modify a function’s locals.
- delattr(object, name)
- The arguments are an object and a string. The string must be the name of one of the object’s attributes. The function deletes the named attribute, provided the object allows it. For example, delattr(x, 'foobar') is equivalent to del x.foobar.
- class dict(**kwarg)
- class dict(mapping, **kwarg)
- class dict(iterable, **kwarg)
- Create a new dictionary. The dict object is the dictionary class. See dict and Mapping Types — dict for documentation about this class.
- For other containers see the built-in list, set, and tuple classes, as well as the collections module.
Sahi can also be added to this list. Some good points about Sahi:
1) It handles AJAX/page load delays automatically, so there is no need for explicit wait statements in code.
2) It can handle applications with dynamically generated ids, and it has easy identification mechanisms to relate one element to another (for example, click the delete button near user "Ram"). ExtJS, ZkOSS, GWT, SmartGWT, etc. have been handled via Sahi.
To handle a JavaScript alert in Selenium, switch to it first:
Alert alert = driver.switchTo().alert();
1) Think about a big data problem you want to solve.
Traditionally, big data has been described by the "3Vs": Volume, Variety, Velocity. What is a real analytics problem that is best solved using big data tools? What kind of metrics do you want to capture? The most common use cases today involve scraping large volumes of log data. This is because log data tends to be very unstructured, can come from multiple sources, and especially for popular websites, can be huge (terabytes+ a day). Thus having a framework for performing distributed computing tasks is essential to solve this problem.
2) Download and set up your big data solution
The easiest thing to do is to use a pre-built virtual machine, which just about any Hadoop provider makes freely available, and run it locally. You could also use a service like Amazon Web Services. Most commonly, people use the MapReduce framework and Hive for crunching large volumes of data. Since you're just looking to learn, you won't need terabytes, or even gigabytes, of data to play with, so getting access to a 100-node cluster won't be a priority, although there are certainly challenges to overcome and understand once you start to get into multi-node environments.
3) Solve your big data problem
Once you have your environment set up, get to coding! There is plenty of documentation and tutorials out there to reference and learn from, and just typing questions into Google will turn up a ton of resources. Read up on the tools and understand how the technology can be applied to your use case. Think about the kinds of metrics you're looking to capture within your data, what kind of map-reduce programs you will need to write to capture the data you want to analyze, and how you can leverage something like Hive or Pig to do a lot of the heavy number crunching. Something that probably won't be apparent in a single-node environment, but is a real-world problem in any distributed environment, is understanding data skew and how it affects performance.
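As a starting point, here is a hedged sketch of the classic word-count job written against the Hadoop MapReduce Java API; the input and output paths are supplied on the command line.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map step: emit (word, 1) for every token in a line of input.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce step: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}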
4) Analytics & Visualization: The sexy side of Big Data & BI
Now that you've solved your big data problem and have your data in a manageable format, it's time to dazzle your boss with some sweet reports. Most enterprise architectures that leverage Hadoop still have a SQL database for storing and reporting data out of Hadoop (you will quickly come to realize that map-reduce has a very long response time, even on small data sets). Loading data out of Hadoop and into a SQL database is good practice for the real world, but for the sake of learning the big data side of it, it isn't necessary. There are several free reporting tools out there that will connect to Hadoop/Hive directly and work fine for learning purposes. If you want to be the cool kid on the block (and super employable at large companies), I would pick up Tableau. You could also lend yourself to picking up some predictive modeling and machine learning skills with some of the tools that are out there, and maybe start calling yourself a data scientist.
- MapReduce is the Google paper that started it all. It's a paradigm for writing distributed code inspired by elements of functional programming. You don't have to do things this way, but it neatly fits a lot of problems we try to solve in a distributed way. The Google-internal implementation is called MapReduce, and Hadoop is its open-source implementation. Amazon's hosted Hadoop offering is called Elastic MapReduce (EMR) and has plugins for multiple languages.
- HDFS (Hadoop Distributed File System) is an implementation inspired by the Google File System (GFS), used to store files across a bunch of machines when the data is too big for one. Hadoop consumes data stored in HDFS.
- Apache Spark is an emerging platform that has more flexibility than MapReduce but more structure than a basic message-passing interface. It relies on the concept of distributed data structures (what it calls RDDs) and operators.
- Because Spark is a lower-level system that sits on top of a message-passing interface, it has higher-level libraries to make it more accessible to data scientists. The machine learning library built on top of it is called MLlib, and there is a distributed graph library called GraphX.
- Pregel, and its open-source twin Giraph, is a way to run graph algorithms on billions of nodes and trillions of edges over a cluster of machines. Notably, the MapReduce model is not well suited to graph processing, so Hadoop/MapReduce are avoided in this model, but HDFS/GFS is still used as a data store.
- ZooKeeper is a coordination and synchronization service that lets a distributed set of computers make decisions by consensus, handle failures, and so on.
- Flume and Scribe are logging services, Flume is an Apache project and Scribe is an open-source Facebook project. Both aim to make it easy to collect tons of logged data, analyze it, tail it, move it around and store it to a distributed store.
- Google BigTable, and its open-source twin HBase, were meant to be read-write distributed databases, originally built for the Google crawler, that sit on top of GFS/HDFS and MapReduce/Hadoop.
- Hive and Pig are abstractions on top of Hadoop designed to help with analysis of tabular data stored in a distributed file system (think of Excel sheets too big to store on one machine). They operate on top of a data warehouse, so the high-level idea is to dump data once and analyze it by reading and processing it, rather than updating individual cells, rows, and columns. Hive has a language similar to SQL, while Pig is inspired by Google's Sawzall. You generally don't update a single cell in a table when processing it with Hive or Pig.
- Hive and Pig turned out to be slow because they were built on Hadoop, which optimizes for the volume of data moved around, not latency. To get around this, engineers bypassed MapReduce and went straight to HDFS, threw in some memory and caching, and the result was Google's Dremel and F1, Facebook's Presto, Apache Spark SQL, Cloudera Impala, Amazon's Redshift, etc. They all have slightly different semantics but are essentially meant to be programmer- or analyst-friendly abstractions for analyzing tabular data stored in distributed data warehouses.
- Mahout is a collection of machine learning libraries written in the MapReduce paradigm, specifically for Hadoop. Google has its own internal version, but they haven't published a paper on it as far as I know.
- Oozie is a workflow scheduler. The oversimplified description is that it puts together a pipeline of the tools described above. For example, you can write an Oozie script that will scrape your production HBase data into a Hive warehouse nightly, then have a Mahout script train on this data. At the same time, you might use Pig to pull the test set into another file, and when Mahout is done creating a model you can pass the testing data through it and get results. You specify the dependency graph of these tasks through Oozie (I may be messing up terminology, since I've never used Oozie but have used the Facebook equivalent).
- Lucene is a bunch of search-related and NLP tools, but its core feature is being a search index and retrieval system. It takes data from a store like HBase and indexes it for fast retrieval from a search query. Solr uses Lucene under the hood to provide a convenient REST API for indexing and searching data. ElasticSearch is similar to Solr.
- Sqoop is a command-line interface for moving SQL data into a distributed warehouse. It's what you might use to snapshot and copy your database tables to a Hive warehouse every night.
- Hue is a web-based GUI to a subset of the above tools.
Corona is a new scheduling framework, developed at Facebook, that separates cluster resource management from job coordination. Corona introduces a cluster manager whose only purpose is to track the nodes in the cluster and the amount of free resources. A dedicated job tracker is created for each job, and it can run either in the same process as the client (for small jobs) or as a separate process in the cluster (for large jobs).
One major difference from Facebook's previous Hadoop MapReduce implementation is that Corona uses push-based, rather than pull-based, scheduling. After the cluster manager receives resource requests from the job tracker, it pushes resource grants back to the job tracker. Once the job tracker gets resource grants, it creates tasks and pushes them to the task trackers for execution. There is no periodic heartbeat involved in this scheduling, so scheduling latency is minimized.