What To Test And What To Avoid
While writing e2e tests for a large app, it is important to figure out which portions deserve priority and which can be excluded. We need not write e2e tests for everything: too much testing quickly becomes unmanageable and bloats the build, leading to lengthy build times. Keep the suite to a size that covers most use cases and gives you confidence during deployments. The following points can help you identify the best approach:
#Identify independent modules
The first step is to identify the independent modules of your application. For instance, in a note-taking app, new registrations, the login process, email/password updates, listing and filtering notes, saving filters, taking and saving notes, editing notes, and sharing notes are some of the modules that compose the application.
#Prioritizing features or test cases from modules
The next step is to select which test cases from each module should make it into the e2e tests. Test cases are all the features and functionalities you'd like to see working. In our example note-taking app, to write tests for the 'listing and filtering notes' module, we'd first think about its test cases: X number of notes load on listing, paging appears at the bottom with next/previous, every note links to the editor with its id, and applying a date filter narrows down the notes. There can be many more, but these are the ones that must be tested first to be able to present a working feature. Take a look at the following points:
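To make one of those cases concrete, here is a minimal sketch in Python. The note records and the filter_by_date helper are hypothetical stand-ins for the app's real listing logic; the point is only to show how a prioritized case like 'applying a date filter narrows down notes' reduces to a few plain assertions:

```python
from datetime import date

# Hypothetical note records; in a real e2e test these would come from the app's UI.
notes = [
    {"id": 1, "title": "Groceries", "created": date(2023, 1, 5)},
    {"id": 2, "title": "Meeting minutes", "created": date(2023, 2, 10)},
    {"id": 3, "title": "Ideas", "created": date(2023, 2, 20)},
]

def filter_by_date(notes, start, end):
    """Return notes created within [start, end] inclusive."""
    return [n for n in notes if start <= n["created"] <= end]

# Test case: applying a date filter narrows down the listed notes.
february = filter_by_date(notes, date(2023, 2, 1), date(2023, 2, 28))
assert [n["id"] for n in february] == [2, 3]

# Test case: an empty range lists no notes rather than failing.
assert filter_by_date(notes, date(2024, 1, 1), date(2024, 1, 31)) == []
```

Writing the cases out this way, before any automation, makes it obvious which ones guard the feature's core behavior and deserve to be automated first.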
#Identifying the nature of a feature
You should prioritize test cases by identifying the nature of a feature. If the feature is critical for the application to succeed, write tests for it first and cover enough test cases to reflect its importance. For instance, in the note-taking app, auto-saving, loading, and editing notes are critical features and should be covered extensively.
#Having fewer test cases isn't that bad
Features and functionalities that are less critical can have fewer test cases initially. Test cases that confirm those features work may be sufficient; more can be added later as time allows. Some testing is much better than no testing and gives you a lot of confidence in the overall application functionality.
#Test coverage isn't everything
You shouldn't think too much about test coverage. You know your application better than anyone else. Decide on your own what gives you confidence during deployments and test those parts first. Try covering many modules with fewer test cases rather than a few modules with exhaustive ones. Remember, you can always add tests as you add new features, refactor things, or find a bug that wasn't previously covered.
Don't test things you're not responsible for or that aren't in your control. Imagine a form in some app that takes user input, sends the user an email, and shows a success message only once the email has been sent. If you're testing it, just verify that the form can be submitted and a success message is displayed. Don't go into verifying the email delivery, because you don't control it. Checking a mailbox can lead to a lot of complexity and fragile test cases that break often.
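A rough sketch of that boundary in Python (the form handler, messages, and StubMailer below are all hypothetical, not from any real framework): the test exercises what you control, submission and the resulting message, while the mailer is a stand-in that never touches a real inbox.

```python
class StubMailer:
    """Test double: records send attempts instead of delivering real email."""
    def __init__(self):
        self.sent = []

    def send(self, to, subject):
        self.sent.append((to, subject))
        return True

def submit_contact_form(email, mailer):
    """Hypothetical form handler: returns the message the UI would display."""
    if "@" not in email:
        return "Please enter a valid email address."
    if mailer.send(email, "Thanks for contacting us"):
        return "Success! Check your inbox."
    return "Something went wrong."

# Verify only what we control: the form submits and the success message shows.
# No mailbox is ever inspected.
mailer = StubMailer()
assert submit_contact_form("user@example.com", mailer) == "Success! Check your inbox."
assert submit_contact_form("not-an-email", mailer) == "Please enter a valid email address."
```

The test stays fast and deterministic because the only thing it depends on is code you own.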
Some applications make use of Google Maps, embedded videos, or podcasts. We suggest against verifying that you can play a video by clicking its play button or navigate through a map. Those embeds usually come from companies known for stability and break very rarely. Not only is testing those embeds tricky, it's also pointless, because they're sure to work unless you have a faulty integration in the first place. Just test them manually once after embedding. If your tests need to validate that those embeds are on the page, give the containers of those embeds a data-testid attribute and verify those elements exist on the page.
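A minimal sketch of that presence check in Python, using only the standard library's html.parser (the page markup and the map-embed/video-embed ids are made-up examples): it collects every data-testid on the page and asserts the embed containers exist without interacting with the embeds themselves.

```python
from html.parser import HTMLParser

class TestIdCollector(HTMLParser):
    """Collects every data-testid value found in the page markup."""
    def __init__(self):
        super().__init__()
        self.test_ids = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "data-testid":
                self.test_ids.add(value)

# Hypothetical page markup containing map and video embed containers.
page_html = """
<div data-testid="map-embed"><iframe src="https://maps.example.com"></iframe></div>
<div data-testid="video-embed"><iframe src="https://videos.example.com"></iframe></div>
"""

collector = TestIdCollector()
collector.feed(page_html)

# Assert the containers exist; don't try to drive the third-party embeds.
assert "map-embed" in collector.test_ids
assert "video-embed" in collector.test_ids
```

In a real e2e test the same idea applies: locate the container by its data-testid and assert it is present, nothing more.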
You shouldn't use ZWL for anything other than writing end to end tests for your application. It is not supposed to perform management tasks like data entry or anything of that sort.