
Primary strategy

The purpose of our site search is to help visitors acquire the knowledge they need as efficiently as possible. Unlike internet search engines, we only have to serve results for our 'family' of sites. However, with more than 150,000 pages, this is still a complicated task, made even more challenging by our distributed authorship model and disparate content sources (i.e. differing quality and completeness of metadata). To do search better, our strategy will focus on four main areas of improvement:

Context

We need to know more about our users, both as groups and as individuals, and about their needs. Only then can we please most people most of the time. The starting point (and the easiest to do) is to reflect the site location in the search experience. This would mean a different design/UI, and even different results, for a user searching from 'Future Students' than for one searching from 'Research'.

Taking this further, we could consider the type of visitor (in broad groups such as international/domestic) or their preferences (perhaps UG cf. PG). This is the start of personalisation. We might even offer a different search on the intranet, where we know users are authenticated as staff (and some PG students), than we do on the public site.
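
To make this concrete, here is a minimal sketch (in Python, purely illustrative) of how a page could pass its site section through to the search service, so the same query is answered via a section-specific profile. The endpoint URL, collection and profile names are all assumptions, not our actual Funnelback configuration.

    import requests

    # Hypothetical mapping of site sections to search profiles.
    # All names below are placeholders, not our real configuration.
    SECTION_PROFILES = {
        "future-students": "future-students",
        "research": "research",
    }

    def search(query, site_section=None):
        """Query the search service, tailoring results to the originating section."""
        params = {
            "collection": "public-web",  # assumed collection name
            "profile": SECTION_PROFILES.get(site_section, "_default"),
            "query": query,
        }
        # Funnelback exposes a JSON search endpoint of roughly this shape;
        # the hostname here is invented for the example.
        response = requests.get("https://search.example.ac.uk/s/search.json",
                                params=params, timeout=5)
        response.raise_for_status()
        return response.json()

    # The same query, answered differently depending on where it was made:
    # search("scholarships", site_section="future-students")
    # search("scholarships", site_section="research")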

Question: What could we do differently if we collected some basic information like role (e.g. AUT example under 'Customise this page')?

Content

The needs of search must feature in our content work (from strategy, through training, to writing) if we are to make real improvements to the relevance of search results. We must:

  • Index and enrich the right content (not all of it)
  • Establish collections properly, i.e. groups of similar resources, so we can limit a search to a subset of results. Funnelback lets us define collections: groups of documents/pages/files with a common thread. Subject areas and UG degrees could be two collections, in turn grouped into a meta-collection of 'UG study things'. We could then search only over this meta-collection on the KYM landing page or a UG study hub (see the sketch after this list). We could also prioritise results differently (e.g. rank policy documents higher for staff when logged in, but not so high on the public site). So, combined with some site-context information or a user-cookie value, we can improve relevancy without expensive content work.
  • Collapse results where the content is almost the same. This applies, albeit not directly, to the recruitment app.
  • Manage our recommended results better (e.g. we currently only/always promote courses, but surely we should sometimes promote a subject area or even a degree)
  • Add promoted results for common queries, known in FB as 'best bets' (e.g. show maps and directions when people search for campuses or rooms)
  • Improve the visual presentation of the most important items (richer snippets)
  • Eliminate junk from the default search experience, or at least lower its priority so it doesn't feature on the first page of results.
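
The meta-collection idea above could look something like the sketch below, with the same caveats as before: the endpoint, the collection names and the use of Funnelback's component-restriction parameter ('clive', if memory serves) are assumptions for illustration only.

    import requests

    SEARCH_URL = "https://search.example.ac.uk/s/search.json"  # assumed endpoint

    def search_ug_study(query):
        """Search only a hypothetical 'ug-study' meta-collection, restricted
        to its subject-area and UG-degree component collections."""
        params = {
            "collection": "ug-study",                  # invented meta-collection name
            "clive": ["subject-areas", "ug-degrees"],  # assumed component collections
            "query": query,
        }
        response = requests.get(SEARCH_URL, params=params, timeout=5)
        response.raise_for_status()
        return response.json()

The same scoped search could back both the KYM landing page and a UG study hub, with different profiles applying different ranking weights for each.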

Question: What collections would give us the greatest benefit to users of search?

Metadata

We need more and better-structured information about our content if we are to substantively improve search result relevancy. Given that FB works differently from the GSA, we may need to alter how we create pages, including the training. The minimum probably includes the following (a sketch for auditing these fields follows the list):

  • A title
  • A description of the contents
  • A number of descriptive keywords
  • Some timestamps marking the content’s lifecycle (e.g. created, published, updated, revised and, finally, possibly archived)
  • An availability status, such as public, access-controlled, valid, outdated or archived
  • Its canonical address, i.e. the original and primary URL
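
To get a first answer to the question below about the current state of our metadata, a small audit script could fetch a sample of pages and report which of these minimum fields are missing. A minimal sketch, assuming the fields are exposed as standard HTML head elements (timestamps and availability status usually live in the CMS rather than the HTML, so they would need a separate check):

    import requests
    from bs4 import BeautifulSoup

    def audit_page(url):
        """Report which of the minimum metadata fields a page is missing."""
        soup = BeautifulSoup(requests.get(url, timeout=5).text, "html.parser")
        checks = {
            "title": soup.title is not None and bool(soup.title.get_text(strip=True)),
            "description": soup.find("meta", attrs={"name": "description"}) is not None,
            "keywords": soup.find("meta", attrs={"name": "keywords"}) is not None,
            "canonical": soup.find("link", rel="canonical") is not None,
        }
        return {"url": url, "missing": [field for field, ok in checks.items() if not ok]}

    # Run over a sample of URLs and tally which fields are most often missing:
    # for page_url in sample_of_urls:
    #     print(audit_page(page_url))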

Question: How good is our metadata currently? How much work is it to improve it?

UX

We need well-researched, well-designed and well-built interfaces, with user feedback to enable continuous improvement. This would include different interfaces for different parts of our site, where for each instance we can ask ourselves, "Who is searching here, what categories of information are they looking for, and what are they likely to do with this information when they find it?" Compare this approach with our main search results page, which currently shows a course-centric set of filter tags for many searches, despite most results not being courses.

Feedback might be as simple as asking somebody who used search whether they found what they were looking for. We should be continually gathering feedback, analysing it, and refining our search experience and index.

Question: How do you feel about a site asking you if the content was useful? How likely are you to answer honestly?


Secondary strategy

Search service team

We need to make search a team priority, both to ensure the work is ongoing rather than intermittent, and because it requires more capability than any one person possesses.

The most important step an organisation can take to improve its search is to appoint an owner of search. That owner must have time set aside to work on search; a few hours a week is much better than nothing. And even more important: working on search is a long-term commitment, certainly not a project.

The roles and competencies in the search service team should consist of:

  • (Business) owner of search
  • Search technician
  • Search editor and/or Information specialist
  • Search analyst
  • Search support

Question: Who would you nominate as the service owner for web site search?

Evaluation of search

Search fulfils its purpose when it delivers the right information, does so quickly, and is always available. To satisfy these requirements, search should be tested regularly and the tests documented in test plans. Below are some appropriate tests (a monitoring sketch follows the list):

  • Search loads quickly, tested with Google PageSpeed Insights, with a minimum score of 80/100.
  • The response time of a query should be about 0.1 seconds, but never longer than 1 second, measured at the user interface.
  • Search will be available 24/7 (around the clock, seven days a week), monitored by, for instance, Pingdom or Uptimerobot.
  • The size of the search indexes is tracked, among other things to see whether more or fewer documents are being indexed; changes can provide early warning signs and help us be proactive.
  • Search’s user interfaces are accessible, tested with the W3C Validator.
  • Search’s user interfaces are usable, tested against webbriktlinjer.se and the W3C's WCAG 2.0 at level AA.
  • Survey the satisfaction of users.
  • Review search statistics and/or perform search analytics, to gain insight into how users are searching. Look regularly at our:
    • Top Xx queries: To gain insight into how a large share of users experience search, whether the relevance model can be improved, and what content is most in demand.
    • Abandoned queries: To spot searches where users gave up without selecting a result, which points to relevance problems.
    • Zero result queries: To identify what content is missing, find synonyms to use, understand which abbreviations are used and discover alternative spellings.
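
As one concrete form the response-time and availability checks above could take, a scheduled probe might time a round trip against the one-second ceiling in this list. The endpoint and collection name are assumed, as in the earlier sketches.

    import time
    import requests

    SEARCH_URL = "https://search.example.ac.uk/s/search.json"  # assumed endpoint

    def probe(query="enrolment"):
        """Time one search round trip and flag it against the targets above."""
        start = time.monotonic()
        try:
            response = requests.get(SEARCH_URL,
                                    params={"collection": "public-web", "query": query},
                                    timeout=5)
            elapsed = time.monotonic() - start
            return {
                "available": response.ok,
                "seconds": round(elapsed, 3),
                "within_target": elapsed <= 1.0,  # 'never longer than 1 second'
            }
        except requests.RequestException:
            return {"available": False, "seconds": None, "within_target": False}

    # Run every few minutes (e.g. from cron) and logged over time, this
    # complements external checks from services like Pingdom or Uptimerobot.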

Question: How would you want search evaluated?


Acknowledging dissatisfaction

Users regularly complain about the relevancy of the current search results, both before and after the move to Funnelback. While one aspect of this is personal preference, the Web Team acknowledges that search has been unloved (i.e. has had little attention lavished on it) and would benefit from an investment of time and resource. The team is always keen to hear of specific examples where search doesn't work or gives poor results. While we can't always alter a specific result set, we do analyse examples to try to understand the underlying issues, as these are what we should work on. So please forward any specific examples of where search doesn't work for you and we will look into them.

Question: What is your top gripe?

