Forms that work: Designing web forms for usability

Research challenges

Although forms are everywhere, there is surprisingly little academic research on them. Organisations run usability tests, A/B tests and other types of research - but they rarely publish the results. If you are looking for a research challenge, here are some ideas.

Research problems in interaction design and graphic design

Forms in two or more languages: 

  • What's the best way of showing users that an extra language is available? (example: Spanish / English)
  • If the extra language is written in a different direction (example: Arabic/English, Hebrew/English), what's the best way to deal with the two languages? (There is a minimal markup sketch after this list.)
  • If the extra language can be written in more than one direction, what's the best way to deal with it? (example: Japanese and Chinese have traditionally been written top-to-bottom and right-to-left, but many websites now set them left-to-right, as in English)
  • If the language is not yet covered by Unicode or another character encoding, what's the best way of dealing with that?
  • What is the best way of getting a date into a form, particularly when it may be unclear whether the calendar is a Gregorian or Islamic one?
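
For the questions about mixed directions, a minimal sketch of the standard HTML mechanisms - the lang attribute to identify the language and the dir attribute to set the base text direction - may help to make the problem concrete. The field names and wording here are invented for illustration; this is not a recommendation about which arrangement works best.

    <form action="/apply" method="post">
      <!-- English section: left-to-right base direction -->
      <fieldset lang="en" dir="ltr">
        <legend>Your name</legend>
        <label for="name-en">Full name</label>
        <input type="text" id="name-en" name="name-en" autocomplete="name">
      </fieldset>

      <!-- Arabic section: the same question, right-to-left base direction -->
      <fieldset lang="ar" dir="rtl">
        <legend>الاسم الكامل</legend>
        <label for="name-ar">الاسم الكامل</label>
        <input type="text" id="name-ar" name="name-ar" autocomplete="name">
      </fieldset>
    </form>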

Problems in layout and design

  • Do users understand links that have an attached icon to indicate 'opens in a new window'? Do they care about the icons? If they see them, do they know what they mean?
  • Is it important to underline links? Do underlined links really make reading more difficult for people with dyslexia? If so, for what types of dyslexia? If you don't underline the links, what is the best way to show that a link is clickable?
  • In HTML markup, a 'button' is different from a 'link', and there is a general rule: 'buttons do things, links go places'. But in forms design, we sometimes need people to read a page of text and then move on to another page of the form. In my experience, users prefer consistent navigation from page to page (click a button to finish the page they are on and to get to the next page they have to work with, whether the page they were on was pure text, purely form widgets, or a mixture of both) rather than swapping from clicking a link if the page was pure text to clicking a button if the page had form widgets on it. Is this true? Do users care whether 'Next' is styled as a button or a link? Do they even notice? (There is a small markup sketch of the two options after this list.)
  • We often see the assertion that ALL CAPS writing is harder to read because the letter shapes are more uniform. This assertion was challenged by Larson in his article The Science of Word Recognition. A recent discussion on Twitter led to the rejoinder 'Larson was wrong'. What is the latest thinking? Has any research been done on this since 2004? If so, does it show that Larson was wrong, and if so, how was he wrong?
  • Bargas-Avila et al created some 'Web form guidelines' in 2011. Some of these are widely supported; others are dubious.
    For example, one guideline requires error messages to be written in familiar language - and I do not know anyone who advocates deliberately writing error messages in confusing or unfamiliar language, so that guideline is uncontroversial.
    Another guideline requires dates to be captured using drop-down boxes, but more recent research by the Government Digital Service has discovered that this approach is ineffective for users with low digital skills: Asking for a date of birth. (There is a sketch of the contrast after this list.)
    There has been one attempt to evaluate the effectiveness of these guidelines as a whole: Seckler et al, 2014, which confirmed that following them all at once is likely to result in a better form. But there has been no replication of that study, no attempt to discover which guidelines have the most effect, and (so far as I know) no attempt to compare these guidelines with any other set.
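
To make the 'buttons do things, links go places' distinction concrete, here is a minimal sketch of the two ways a 'Next' control might be marked up. The URLs, field names and wording are invented for illustration.

    <!-- 'Next' on a pure-text page: a link that simply goes to the next page -->
    <a href="/apply/step-2">Next</a>

    <!-- 'Next' on a page with form widgets: a button that submits this page's
         answers, after which the server sends the user to the next page -->
    <form action="/apply/step-1" method="post">
      <label for="full-name">Full name</label>
      <input type="text" id="full-name" name="full-name">
      <button type="submit">Next</button>
    </form>

The research question is whether users notice or care when the same 'Next' action is sometimes marked up as one of these and sometimes as the other.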
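
On the date of birth point, a minimal sketch of the contrast may help: the drop-down approach on one hand, and on the other the kind of alternative the Government Digital Service's patterns describe - separate text boxes for day, month and year. The names and attributes here are illustrative, not GDS's published markup.

    <!-- Drop-down approach: one select per part of the date -->
    <label for="dob-month-select">Month of birth</label>
    <select id="dob-month-select" name="dob-month-select">
      <option value="1">January</option>
      <option value="2">February</option>
      <!-- remaining months omitted for brevity -->
    </select>

    <!-- Alternative: three separate text boxes for day, month and year -->
    <fieldset>
      <legend>Date of birth</legend>
      <label for="dob-day">Day</label>
      <input type="text" id="dob-day" name="dob-day" inputmode="numeric" size="2">
      <label for="dob-month">Month</label>
      <input type="text" id="dob-month" name="dob-month" inputmode="numeric" size="2">
      <label for="dob-year">Year</label>
      <input type="text" id="dob-year" name="dob-year" inputmode="numeric" size="4">
    </fieldset>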

Research problems in writing and content design

  • If you are writing to the user as 'you', should answers that are provided as options with radio buttons be written as 'I' or 'you'?
  • We frequently see a recommendation of 75 characters per line. Is this still appropriate for responsive layouts? Is there a longer line length that might work better as the basis?
  • Previous research suggests to me that instructions are easier to read, and processed more accurately, when writers ensure that any 'if/then' constructions (equivalently, 'do this/unless' and 'do this/however, if') are strictly written with the 'if' before the 'then'. So 'if/then' is OK, but 'do this/unless' is wrong. In official writing, 'do this/unless' and 'do this/if this applies' constructions are common - but, I consider, wrongly written. Is this true? Does rewriting instructions to put 'if' clauses before 'then' improve them?
  • We see recommendations to 'front load' links with the action ('Apply now if you want to win') rather than with the 'if' clause at the front ('If you want to win, apply now'). Does front-loading really work better?
  • Is it better to word a question so that the user chooses from answers 'yes' and 'no', or is it better to word it so that the user chooses from differently-worded answers?

 

Research problems in service design and business process design

  • The UK Government Digital Service has a Service Manual that provides information and guidance for teams developing government services. If a new team starts to design a form based on the advice in the manual, are they able to produce a good form?  

 

Research problems in the design of research

  • A research question from Leisa Reichelt (Australia): is it better to use ALL CAPS or normal handwriting on post-it notes? The hypothesis is that the increased legibility of the careful handwriting necessary for ALL CAPS outweighs the decreased legibility of ALL CAPS.
  • A person who has never conducted any research, or is new to user research, decides to start doing usability testing by exactly following the advice in the Service Manual. Another person starts by exactly following the advice in Steve Krug's popular book on usability testing, 'Rocket Surgery Made Easy'. Who does better? Who learns more?
  • Or, for either of those scenarios: if someone sets out to learn how to do usability testing following just one of those sets of guidance, do they conduct a satisfactory usability test?
  • A team decides to have stakeholders observing a usability test. There are various sets of 'rules for observers' available. Which set of rules works best? Does having a set of rules improve or undermine the observers' experience? Does it improve or undermine the quality of stakeholders' decision-making after the test?
  • In a landmark study, Tullis and Stetson (2004) compared the SUS (System Usability Scale) with some other questionnaires. They concluded that SUS was the best of the questionnaires available. Is this still the case?
  • So far as I know, SUS has never been subjected to full cognitive interviewing. I suspect that cognitive interviewing would reveal some difficulties with the statements. In a brief demonstration during my UXPA presentation on surveys, attendees tried a little cognitive interviewing on one statement, and the results suggested that there may be some problems.
  • The Seckler et al study mentioned above makes use of the Forms Usability Scale, an attempt to create a questionnaire to evaluate forms that draws on some of the ideas in our book. This has also not been tested using cognitive interviewing, nor compared with SUS or any of the other standard usability questionnaires.
  • The Forms Usability Scale includes an item 'in general I was pleased with the form', which suggests that users have opinions about forms (whereas in our book, we make the point that users don't care much about forms but instead care whether or not they were able to achieve their overall goal - the form is a barrier between them and what they are trying to do, not an entity in its own right). If users have a higher-level task, such as 'find and book a hotel for your upcoming vacation', to what extent do they notice whether the forms they use for searching and booking a hotel are good forms or not? What do they perceive as 'good' when undertaking that task? If they do rate the form as 'good', what aspects of the form do they consider in their ratings? Do those aspects bear any resemblance to the topics considered in the Forms Usability Scale?

 

Replications and challenges of other research

Very little research of any kind is done on forms, and so replications / challenges to research are even less common. Any replication of any of the results cited above or elsewhere would be extremely useful.

One rare example of replication/challenge: there is a very famous UXmatters article on label placement in forms (Penzo, 2006) that has since been challenged by Das, McEwan and Douglas (2008), but that was cited as a replication in the Handbook of Human Factors in Web Design. An up-to-date study that examined the Penzo claims would be useful.

The Baymard Institute, based in Copenhagen, does fine research on e-commerce, much of which touches on forms. Replication or challenge of any of their results would be useful.

 

Literature search and bibliography

This page cites some of the available references in forms design. I do not know of a published bibliography in English. My article 'The top 5 books on forms design' compares five books available in 2010 and mentions whether they include references. There is a bibliography in Dutch. A literature search, particularly on articles published since 2010, would be incredibly useful.

 

 
