Primary strategy
The purpose of our site search is to help visitors acquire the knowledge they need as efficiently as possible. Unlike internet search engines, we only have to serve results for our 'family' of sites. However, with more than 150,000 pages, this is still a complicated task, made even more challenging by our distributed authorship model and the disparate content sources (i.e. differing quality and completeness of metadata). Our strategy will focus on four main areas of improvement:
Context
...
We need to know more about our users, both as groups and as individuals, and their needs. Only then can we please most people most of the time. The starting point (and the easiest to do) is to reflect the site location in the search experience. This would mean a different design/UI, and even different results, for a user searching from 'Future Students' than for one searching from 'Research'.
Taking this further, we could consider the type of visitor (in broad groups based on their role/needs/purpose for visiting, such as international/domestic), their location on our site, and their preferences (explicit, or implicit based on previous visits; maybe UG cf. PG). This is the start of personalisation. We might even offer a different search on the intranet, where we know users are authenticated as staff (and some PG students), than we have on the public site.
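The context idea above can be sketched in code. This is a minimal, hypothetical illustration only: the section names and collection labels are assumptions for the example, not our actual site structure or Funnelback configuration.

```python
# Hypothetical sketch: choosing a search scope from the visitor's site
# context. Section and collection names are illustrative assumptions.

# Map each site section to the collections its search should cover.
SECTION_SCOPES = {
    "future-students": ["courses", "subject-areas", "scholarships"],
    "research": ["research-groups", "publications", "staff-profiles"],
    "intranet": ["policies", "forms", "staff-news"],
}

DEFAULT_SCOPE = ["whole-of-site"]


def scope_for(section: str, authenticated: bool = False) -> list[str]:
    """Return the collections to search, given the site context.

    Authenticated intranet users get policy documents prioritised,
    mirroring the 'different search for the intranet' idea above.
    """
    scope = SECTION_SCOPES.get(section, DEFAULT_SCOPE)
    if authenticated and section == "intranet":
        # Policies first for logged-in staff.
        scope = ["policies"] + [c for c in scope if c != "policies"]
    return scope


print(scope_for("future-students"))
print(scope_for("unknown-section"))
```

The same lookup could equally be driven by a cookie value or login state rather than URL path; the design choice is simply that context selects scope before any query runs.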
Question: What could we do differently if we collected some basic information like role (e.g. AUT example under 'Customise this page')?
Content
Search must feature in our content work (from strategy, through training, to writing) if we are to make real improvements to the relevance of search results. More content is not better; rather, we need to work at relevance. We must:
- Index and enrich the right content (not all of it)
- Establish collections properly, being groups of similar resources, so we can limit search to a subset of results. Funnelback allows us to define collections: document/page/file groups with a common thread. Subject areas and UG degrees could be two collections, in turn grouped into a meta-collection 'UG study things'. We could search only over this meta-collection on the KYM landing page or a UG study hub. We could also prioritise results differently (e.g. rank policy documents higher for staff when logged in, but not so high on the public site). So, combined with some site context information or a user-cookie value, we can improve relevancy without expensive content work.
- Collapse results where the content is almost the same. This applies, albeit not directly, to the recruitment app.
- Manage our recommended results better (e.g. we currently only/always promote courses, but surely we should sometimes promote a subject area or even a degree)
- Add promoted results for common queries, known in FB as 'best bets' (e.g. show maps and directions when people search for campuses or rooms)
- Improve the visual presentation of the most important items (richer snippets)
...
- Eliminate junk from the default search experience, or at least lower its priority so it doesn't appear on the first page of results.
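The collections and meta-collections idea from the list above could be modelled roughly as follows. This is a sketch under stated assumptions: the collection names and document IDs are invented for illustration and are not our real Funnelback setup.

```python
# Hypothetical sketch of collections and meta-collections.
# Collection names and document IDs are illustrative assumptions.

# Each collection is a set of document IDs sharing a common thread.
collections = {
    "subject-areas": {"sa-101", "sa-102"},
    "ug-degrees": {"deg-arts", "deg-science"},
    "policies": {"pol-1", "pol-2"},
}

# A meta-collection groups collections for contextual searching,
# e.g. 'ug-study' for a KYM landing page or UG study hub.
meta_collections = {
    "ug-study": ["subject-areas", "ug-degrees"],
}


def search_scope(meta: str) -> set[str]:
    """All document IDs reachable through a meta-collection."""
    docs: set[str] = set()
    for name in meta_collections.get(meta, []):
        docs |= collections[name]
    return docs


# A search on a UG study page would query only this subset:
print(sorted(search_scope("ug-study")))
```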
Question: What collections would give us the greatest benefit to users of search?
SEO
Metadata
...
We need more and better-structured information about our content in order to substantively improve search result relevancy. Given that FB works differently from GSA, we may need to change how we create pages, including the training. The minimum probably includes:
- A title
- Description of the contents
- A number of descriptive keywords
- Timestamps marking the content’s lifecycle (e.g. created, published, updated, revised and, finally, possibly archived)
- Availability status, such as public, access-controlled, valid, outdated or archived
- Its canonical address, i.e. the original and primary URL
...
- Keywords used in the content
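A simple audit against the minimum metadata list above might look like this. The field names here are assumptions for illustration; our CMS and Funnelback schema may use different keys.

```python
# Hypothetical sketch: checking a page record against the minimum
# metadata fields listed above. Field names are assumptions.

REQUIRED_FIELDS = [
    "title",
    "description",
    "keywords",
    "created",        # lifecycle timestamps
    "updated",
    "status",         # e.g. public, access-controlled, archived
    "canonical_url",  # the original and primary URL
]


def missing_metadata(page: dict) -> list[str]:
    """Return the required fields a page record lacks or leaves empty."""
    return [f for f in REQUIRED_FIELDS if not page.get(f)]


# Example page with incomplete metadata (all values invented):
page = {
    "title": "Bachelor of Science",
    "description": "Overview of the BSc degree.",
    "keywords": "science, undergraduate",
    "canonical_url": "https://example.edu/study/bsc",
}
print(missing_metadata(page))
```

Running such a check across the whole index would give a first answer to the question below about how good our metadata currently is.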
Question: How good is our content/metadata currently? How much work is it to improve it?
UX
We need well-researched, well-designed and well-built interfaces, with user feedback to enable continuous improvement. This would include different interfaces for different parts of our site, where we can ask of each instance: "Who is searching here, what categories of information are they looking for, and what are they likely to do with this information when they find it?" Compare this approach to our main search results page, where we currently show a course-centric set of filter tags for many searches, despite most results not being courses.
Feedback might be as simple as asking somebody who used search whether they found what they were looking for. We should be continually gathering feedback, analysing it, and refining our search experience and index, looking regularly at our top queries, abandoned queries and zero-result queries.
Question: How do you feel about a site asking you if the content was useful? How likely are you to answer honestly?
Secondary strategy
Search service team
We need to make search a team priority, both to ensure the work is ongoing rather than intermittent, and because it requires more capability than one person possesses.
The most important thing an organisation can do to improve its search is to appoint an owner of search. That owner must have time set aside to work on search; a few hours a week is much better than nothing. And even more important: working on search is a long-term commitment, certainly not a project.
The roles and competencies in the search service team should include:
- (Business) owner of search
- Search technician
- Search editor and/or Information specialist
- Search analyst
- Search support
Question: Who would you nominate as the service owner for web site search?
Evaluation of search
Search fulfils its purpose when it delivers the right information, does so quickly, and is always available. To satisfy these requirements, search should be tested regularly, and tests should be documented in test plans. Some appropriate tests:
- Search loads quickly, tested with Google Pagespeed Insights, with a minimum of 80/100.
- The response time of a query should be about 0.1 seconds, but never longer than 1 second, measured at the user interface.
- Search should be available 24/7 (around the clock, seven days a week), monitored by, for instance, Pingdom or Uptimerobot.
- Size of the search indexes: among other things, to see whether more or fewer documents are being indexed, which can provide early warning signs and help us be proactive.
- Search’s user interfaces are accessible, tested with the W3C Validator.
- Search’s user interfaces are usable, tested against webbriktlinjer.se and W3C's WCAG 2.0 at level AA.
- Survey the satisfaction of users.
- Reviewing search statistics and/or performing search analytics, to gain insight into how users are searching.
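The response-time test above can be sketched as a small client-side check. This is a minimal illustration, not a monitoring tool: `run_query` is a stand-in for a real HTTP request to the search endpoint, and the thresholds come from the targets stated above (about 0.1 s, never longer than 1 s).

```python
import time

# Hypothetical sketch: time a query at the client and flag it against
# the 0.1 s target and 1 s ceiling from the test plan above.
# run_query is a stub standing in for a real search request.

TARGET_SECONDS = 0.1
CEILING_SECONDS = 1.0


def run_query(q: str) -> list[str]:
    """Stand-in for an actual search request."""
    return [f"result for {q}"]


def timed_query(q: str) -> tuple[float, str]:
    """Run a query and classify its response time."""
    start = time.perf_counter()
    run_query(q)
    elapsed = time.perf_counter() - start
    if elapsed > CEILING_SECONDS:
        verdict = "fail"
    elif elapsed > TARGET_SECONDS:
        verdict = "warn"
    else:
        verdict = "ok"
    return elapsed, verdict


elapsed, verdict = timed_query("campus map")
print(f"{elapsed:.4f}s -> {verdict}")
```

Run regularly and logged, checks like this would feed the documented test plans the section calls for.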
...
Insights
- We need to acknowledge site location/context in search far, far more than we do at present, when most things default to a whole-of-site search.
- Collections are powerful, not only for grouping results, but also for powering contextual searching.
- We need to internationalise more of the key pages, then use the key cookie settings to display the correct 'version'.
Questions
- Should we continue to use the 'promoted results' for courses, or transition to subjects? Or degrees?
- Should we be adding keywords to subjects and/or subject areas? Can a subject area 'inherit' the keywords of its component subjects?
- How do we strike a balance between what the user wants to see and what we want the user to see?
https://webstrategyforeveryone.com/example-enterprise-search-strategy/
One helpful model is to consider four different modes used when searching for information:
- Known knowledge: users searching for something already known are easy to serve, because they know what they want, can express it well, and have an idea of where to start looking. E.g. I want to learn the major requirements for the second and third year of my UG degree.
- Exploring: the user has an idea of what they want to know, but may have difficulty expressing it or may not know the correct terms. The user often knows when they have found the right content, but may not know whether the result is sufficient. E.g. I want to complete part of my study overseas.
- Do not know what they need: users often do not know exactly what they need to know. They may believe they need to know one thing when in reality it is something else. Sometimes they visit an information source without any specific purpose. E.g. I am looking for something to study, but don't really know what I should do.
- Retrieve: the user is looking for information they have prior knowledge of; they may even remember where they saw it recently, which source of information it was, or have an idea of where to find the content.
http://www.galaxyconsulting.net/images/White_Paper_April_2014.pdf
- Define specific objectives for specific search 'tools'
- Who is searching?
- What categories of information are they looking for?
- What are they likely to do with the information when they find it?
- Define logical types of searches
- People search
- Product search
- Customer search
- Define the desired scope and inventory repositories
...
- Look regularly at our:
- Top Xx queries: to gain insight into how search performs for a large share of users, whether the relevance model can be improved, and what content is most in demand.
- Abandoned queries: to spot searches where users did not find anything worth opening.
- Zero result queries: To identify what content is missing, find synonyms to use, understand which abbreviations are used and discover alternative spellings.
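The query review above can be sketched as a small log analysis. This is a hypothetical illustration: the log format (query, result count, whether anything was clicked) and all the example queries are assumptions, not our actual analytics schema.

```python
from collections import Counter

# Hypothetical sketch: from a simple log of (query, result_count,
# clicked) tuples, list top queries, zero-result queries and
# abandoned queries. The log format is an assumption.

log = [
    ("bachelor of arts", 120, True),
    ("bachelor of arts", 120, True),
    ("campus map", 15, False),
    ("pyschology", 0, False),   # misspelling leading to zero results
    ("semester dates", 8, True),
    ("campus map", 15, True),
]

# Most frequent queries: what content is most in demand.
top_queries = Counter(q for q, _, _ in log).most_common(3)

# Zero-result queries: missing content, synonyms, spellings.
zero_result = sorted({q for q, n, _ in log if n == 0})

# Abandoned queries: results shown but nothing clicked.
abandoned = sorted({q for q, n, clicked in log if n > 0 and not clicked})

print("Top:", top_queries)
print("Zero results:", zero_result)
print("Abandoned:", abandoned)
```

In practice these figures would come from Funnelback's own analytics; the point of the sketch is only what each metric tells us.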
Question: How would you want search evaluated?
Acknowledging dissatisfaction
Users regularly complain about the relevancy of the current search results, both before and after the move to Funnelback. While one aspect of this is personal preference, the Web Team acknowledge that search has been unloved (i.e. had little attention lavished on it) and would benefit from an investment in time and resource. The team is always keen to hear of specific examples where search doesn't work or gives poor results. While we can't always alter that specific result set, we do analyse to try and understand the underlying issues, as these are what we should work on. So, please forward any specific examples of where search doesn't work for you and we will look into them.
Question: What is your top gripe?