“We use Google…to find out about our own company”

You wouldn’t believe the number of times I have heard people say that, when they want to find out about their own company, they use Google.

Case in point – the other day I was at a well-known appliance store that has branches throughout the country. I asked the girl at the checkout whether there was a store in one particular city. While she looked furtively at her screen, I took a peek over her shoulder. It was the company’s intranet. I advised her to open a new tab in her browser, go to Google, and type in the name of the store plus the word “branches”. She obediently followed my instructions, and two minutes later she was able to give me an answer.

I won’t talk about the magic that Google performs to bring you the information you want. I do, however, want to talk about why people go to an outside facility rather than using the company’s own resources: findability and usability.

Findability does not just mean being able to search for something and get results. It also means that the information on the intranet is structured in a logical way that allows people to navigate to it quickly. Often, little thought has gone into the way information should be presented:

  • What information do the users (in this case all staff ranging from back office workers to those at the client interface) need access to?
    Analytics will show you what is being accessed the most. Well-thought-out surveys can return valuable information. Even talking to staff members individually, or in groups, can add a lot of value.
  • How can the navigation structure be set up so that it is intuitive?
    Use the feedback you got. Perform a card sort to help build up an understanding of how the staff want information grouped. Put together a “mock navigation”, using a suitable tool such as Optimal Workshop’s Treejack, and see how easy it is for users to find what they are looking for.
  • What other ways are there that the information can be accessed quickly? Short-cuts, quick links, FAQs.
    Create a screen mock-up, and test how easy it is for staff to find the information. Use a tool that allows this to be simulated on-line, and set up real-life scenarios involving staff members with different functions to determine whether improvements can be made.
  • Pay attention to the questions that staff ask most often.
    “How is xyz done?”, “Where do I find information on our widgets?” Questions like these come up again and again, and they form the basis for FAQs or a wiki (see the sketch after this list).
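As a concrete illustration of that last point, here is a minimal sketch (my own, assuming your analytics tool can export a CSV with a “query” column) of mining an intranet search log for the most frequently used search terms – prime candidates for FAQs and quick links:

```python
from collections import Counter
import csv

def top_search_terms(log_path: str, n: int = 20):
    """Return the n most frequent queries in an intranet search log.

    Assumes a CSV export with a 'query' column -- adjust the column
    name to whatever your analytics tool actually produces.
    """
    counts = Counter()
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Normalise case and whitespace so "Branches" and "branches " count together
            term = row["query"].strip().lower()
            if term:
                counts[term] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for term, count in top_search_terms("search_log.csv"):
        print(f"{count:6d}  {term}")
```

The terms that float to the top are the ones worth answering once, prominently.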

 

History of Search … the infographic

In connection with the “History of Search” theme, below is an interesting infographic…

Internet Search Engines: History & List of Search Engines

Infographic by WordStream Internet Marketing

 

A quote from 1958

Technology, so adept in solving problems of man and his environment, must be directed to solving a gargantuan problem of its own creation. A mass of technical information has been accumulated, and at a rate that has far outstripped the means for making it available to those working in science and engineering.

FACETS OF THE TECHNICAL INFORMATION PROBLEM
Charles P. Bourne & Douglas C. Engelbart, 
SRI Journal, Vol. 2, No. 1, 1958

 

A very brief history of search


Intranet Focus provides information management and intranet management consulting services. They also regularly publish a Research Note packed with great stuff. In the November issue, there is an interesting piece on the history of Search. Martin White, the Managing Director, has granted me permission to publish it here (see below).

By the way – Martin has recently published a book on Enterprise Search. You can find it at the O’Reilly site. (http://shop.oreilly.com/product/0636920025689.do). It’s certainly on my Christmas list this year.

A very brief history of search

Search came into prominence with the advent of web search services in the 1990s, notably Alta Vista, Google, Microsoft and Yahoo. However, the history of search technology goes back much further than this. Arguably the story starts with Douglas Engelbart, a remarkable electrical engineer whose main claim to fame is that he invented the mouse that is now a standard control device for personal computers. In 1959 Engelbart started up the Augmented Human Intellect program at the Stanford Research Institute in Menlo Park, California. One of his research students was Charles Bourne, who worked on whether it would be possible to transform the batch search retrieval technology developed in the 1950s into a service, based on a large mainframe computer, which users could connect to over a network.

By 1963 SRI was able to demonstrate the first ‘online’ information retrieval service using a cathode ray tube (CRT) device to interact with the computer. It is worth remembering that the computers being used for this service had 64K of core memory. Even at this early stage of development the facility to cope with spelling variants was implemented in the software.  Other pioneers included System Development Corporation, Massachusetts Institute of Technology and Lockheed. The main focus of these online systems was to provide researchers with access to large files of abstracts of scientific literature to support research into space technology and other large scale scientific and engineering projects.

These services were only able to search short text documents, such as abstracts of scientific papers. In the late 1960s two new areas of opportunity arose which prompted work into how to search the full text of documents. One was to support the work of lawyers who needed to search through case reports to find precedents. The second was also connected to the legal profession, and arose from the US Department of Justice deciding to break up what it regarded as monopolies in the computer industry (targeting IBM) and later the telecommunications industry, where AT&T was the target. These actions led IBM in particular to make a massive investment into full-text search which by 1969 led to the development of STAIRS (Storage and Information Retrieval System) which was subsequently released in 1973 as a commercial IBM application. This was the first enterprise search application and remained in the IBM product catalogue until the mid-1990s.

One of the core approaches to information retrieval is the use of the vector space model for computing relevance, developed by Professor Gerald Salton of Cornell University over a period of two decades starting in 1963. The vector space model uses a cosine vector coefficient to compare the similarity of the content of the document to the query terms. This is the basis for most enterprise search applications, with the notable exceptions of Recommind (which uses Probabilistic Latent Semantic Indexing) and Autonomy.
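In concrete terms: if a document d and a query q are each represented as a vector of term weights, the cosine coefficient scores their similarity as the cosine of the angle between the two vectors:

$$\operatorname{sim}(d,q) = \frac{d \cdot q}{\lVert d \rVert \, \lVert q \rVert} = \frac{\sum_i d_i q_i}{\sqrt{\sum_i d_i^2}\,\sqrt{\sum_i q_i^2}}$$

A score of 1 means the document and the query use the same terms in the same proportions; a score of 0 means they share no terms at all.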

In 1984 Dr. Martin Porter, at the University of Cambridge, wrote Muscat for the Cambridge University MUSeum CATaloguing project. Over the ensuing decade this software was arguably the first to use probability theory in natural language querying, focusing on the relative value of a word – either in the search expression, or in the document being indexed. Identifying links and correlations between significant words that co-occur across the whole document collection creates a probabilistic model of concepts. Using a probabilistic approach to determine relevance dates back to research undertaken at the RAND Corporation in the late 1950s, and by the late 1980s there was a substantial amount of research into the use of Bayesian probability models for information retrieval.

The history of Autonomy dates back to the formation in 1991 of Cambridge Neurodynamics by Dr. Mike Lynch. Cambridge Neurodynamics used neural network and pattern recognition approaches for fingerprint recognition. In 1996 Dr. Lynch founded Autonomy together with Richard Gaunt, with $15 million in funding from investors including Apax Venture Capital, Durlacher and the English National Investment Company (ENIC). The novel step was not just the use of Bayesian statistics but the combination of these statistical approaches with non-linear adaptive signal processing (used by Cambridge Neurodynamics for analysing fingerprint images) of text. For that time, the level of investment in a company with no commercial track record was quite remarkable. In 1998 the company was floated on EASDAQ, which capitalised the company at around $150 million, and its shares rose quickly from $15 in October 1999 to $120 in March 2000. This valued the company at over $5 billion.

The company was floated on the London Stock Exchange in 2000, and became the only publicly-quoted search company in the world. This was important for procurements in both the corporate and public sector given that all other search companies remain privately held and do not disclose earnings and profits other than under a non-disclosure agreement with a prospective customer.

Latent Semantic Indexing dates from the late 1980s, and Probabilistic Latent Semantic Indexing from the late 1990s; among other features, they provide solutions to the issues raised by different words having the same meaning (synonymy) and the same word having different meanings (polysemy).

A big thanks to Martin for this information, and also for bringing to my attention the names of Gerald Salton and Douglas Engelbart. I recommend that you click on the links below and read more about the fascinating work that these two have done.

I also highly recommend that you check out Intranet Focus’s site, and read some of the great stuff there.

Recommended Reading

I want Google Search (again)

markjowen:

I have come across this sentiment often (that is, users want “Google Search” – see my earlier post “We want Google“).

TSG’s blog post really captures some great ways of handling this…

 

Originally posted on TSG Blog:

Oftentimes Documentum users, frustrated with Webtop Search or Advanced Search, will request “Can we just have a Google Search?”. This post will provide input to Documentum developers on how to best address this ongoing request. While this post is specifically focused on Documentum developers, lessons learned about interface design apply to our Alfresco and SharePoint readers as well.


Beta Testing SLIKK – My feedback

In my earlier post “Beta testing SLIKK” I described how I requested an invitation to Beta test SLIKK – a site that was offering a new way of searching.

Well, after about a week, I got my invitation and sat down to give SLIKK a test drive.

Here are my findings…

SLIKK Search Application

The SLIKK Search application is a Search Interface that aims to provide the “new” way of searching.

SLIKK Features

On the surface, SLIKK looks like a great tool. Its features include:

SEARCH ENGINE

SLIKK can be configured to return search results from Google, Yahoo/Bing, or SLIKK’s own engine. Google results are selected by default.

CONTENT TYPE

SLIKK provides search results based on source material:

  • Web
  • Images
  • News
  • Video
  • Blogs
  • Twitter

MULTI-VIEW

With Multi-view, a split screen can be displayed to show you two different groups of results. (For example – “Web” search results on the left, and “Video” search results on the right.)

OPEN SEARCH RESULTS

SLIKK offers the ability to open the source page that the search result is pointing to in a small “child” window. This is not a preview, but the actual page. Further to that, you can open multiple “source pages” and have these open either in a series of tabs or “tiled”. Then you have the choice of changing to full screen, etc.

MY LINKS

You can choose from a selection of sites (Google Maps, Twitter, etc.), or enter your own, so that these appear at the top of the SLIKK page.

What I thought of SLIKK

At first glance SLIKK appears to be a great application.

However, when I looked closer at each feature, I started to think “ok…but what is the real advantage that is offered here?”

Search Engine – You can select the search engine that you want the search results from. Really? I can easily do the same by going to the Google site and executing a search there, or going to the Bing site and executing the search there.

Content Type – This is nothing that the “legacy” search engines didn’t already offer. However, being able to get Twitter results was definitely something I was happy with.

Multi-View – Initially I thought that this was pretty cool. But, to be honest, there isn’t really that much advantage to this feature. The only value I saw was if you wanted to see, side-by-side, search results for something while viewing what was being tweeted about it at the same time. But then…how often do you want to do that?

Open Search Results – Note – this is not a “preview” feature similar to what Google offers. It is a “child window” with the source site in it. In these times of tabbed browsers, I was struggling to find a real advantage to this.

My Links – When I first clicked on this, I thought that it would offer real value. But all it does is display the name of the site at the top of the screen which, when clicked on, will open the site in a new tab, or window. In short – bookmarks/favorites.

Overall…

I found that SLIKK was not actually that Slick. I certainly applaud the owners of SLIKK for what they are doing, but I feel that the big search engines are already able to offer so much more.

Beta Community

SLIKK have a Beta program in place, with a forum and a blog (as well as a Facebook page, etc.). They do seem quite receptive to input from users and appear to be trying hard to create something that people want.

I wish them the best of luck.

Beta Testing SLIKK

While doing some research to help someone I’m “mentoring” (as part of the AIIM “Enthusiasts Club”), I came across the SLIKK search engine.

It appears to use search results from Google, but offers a number of useful ways to view them, as well as the website or source that they point to.

The site is still in Beta testing, and is “by invitation only”, so I’ll see what happens. If it all goes well, I’ll keep you up-to-date.

Better Knowledge-Sharing: Fill The Dry Knowledge Well With These Practices

The post below was written by Sebastian Francis in 2010 for OilandGasInvestor.com.
At that time Sebastian worked for SAIC.

It’s an excellent article that describes some major concerns with knowledge management (including the capturing of tacit knowledge), along with making that knowledge retrievable and useful.
I’m grateful that Sebastian has given me permission to reproduce it here.

————————————————————————–

Better Knowledge-Sharing: Fill The Dry Knowledge Well With These Practices

- a Guest Post by Sebastian Francis

Here are a variety of quick and easy ways to share important business information each generation needs to know.

Today, a pressing need of organizational leaders is to quickly identify, capture and reuse information from employees who are retiring, or about to do so, for these people have industry expertise that should be made quickly available to those who need it.

The ability to quickly access the right information can improve competitive position, promote innovation, reduce rework and errors, and increase the speed to identify new opportunities.

Unfortunately, searching for information (such as proven practices, lessons from prior unsuccessful attempts, tips and techniques, documented procedures and, most important, experience and intuitive expertise locked in the heads of individuals) can take far too much time.

As the crew shift change continues—Baby Boomers retire en masse and few Generation X and Y workers enter the oil and gas industry—leaders have an opportunity to manage this shift by leveraging the latest information-sharing technologies and methods. To meet business strategy, many leaders crave the ability to “google like Google.” They desire to create a deep reservoir of information that replenishes itself, and to deploy methods and tools that will enable each generation to find the right information within a few clicks.

Too often, organizations rely on only one method, such as launching communities of practice, conducting after-action reviews, and promoting the use of best practices. Or, they use a limited number of technologies such as social networking tools, content repositories and search engines, and use the same solution across the board. This tends to produce dry knowledge wells.

A savvy strategy begins with understanding the needs of the internal talent group: Who has the information and who needs it? Next is meeting unique requirements by implementing several methods and technologies to create custom solutions. Characteristics of the solution should emulate popular knowledge-sharing practices that occur outside the organization.

Understanding generational issues

“What we’ve got here is a failure to communicate.”
–Strother Martin, Cool Hand Luke, 1967 film starring Paul Newman

Communication problems are as old as human history. Bridging gaps is a continual challenge, and industry leaders need to know how to capitalize on overcoming those gaps.

Within the oil and gas industry (as well as in other industries), there are four generations of talent: Traditionalists (birth years 1925-1945), Baby Boomers (1946-1965), Generation X (1966-1980) and Generation Y (1981-2000). Since the 1990s, professional journals have alerted oil and gas leaders that the Baby Boomers, now the largest percentage of the workforce, are exiting the workforce at an alarming rate. The potential consequences include:

  • Increased competition for talent. Due to the decrease in skilled talent following the retirement of the Traditionalists and Baby Boomers, competition for workers with the required professional degrees and experience will increase.
  • Shifting geography. Technology enables talent to work from anywhere and teleworking is becoming more commonplace; therefore, organizations will be able to source talent globally. This shift will affect organizational communication, strategy and business processes.
  • Shifting generation. The corporate leaders of tomorrow will most likely be talent from Generations X and Y. Currently, organizations are balancing the retirement of two groups with preparing the organization for two others, while not neglecting any.
  • Aging workforce. A majority of Baby Boomers are predicted to exit the workforce by 2015, followed by a much smaller group of talent, Generation X. In addition, the next generations of talent have different learning styles, communication preferences and work/life balance requirements than their predecessors. To recruit, retain and develop the next generation of talent, organizations must recognize and adapt to these styles.
  • Lost information and tacit knowledge. As Traditionalists and Baby Boomers exit organizations, some for the last time, so will their communal know-how—their tacit knowledge—especially if it has not been adequately identified, captured, codified and stored in corporate knowledge repositories.
  • Preparing and training talent. The fact that Traditionalists and Baby Boomers are retiring does not mean that they will not re-enter the workforce in some capacity, such as starting a new career, or working as a consultant or part-time employee. In some cases, organizations will be able to leverage veteran expertise in this way. As a result, organizations will need to update the skills of these workers, or train them along with other new hires. Thus, learning/training departments may simultaneously have to train several generations, each having distinctly different learning styles. This can perplex learning organizations that do not understand the needs of each generation.

Bridging generational gaps to improve knowledge management

“When you’re 17 years old, green and inexperienced, you’re grateful for any guidance and direction you can get.”
–Christina Aguilera, pop singer

Leaders who recognize and respond to generational communication and learning commonalities and differences, can bridge gaps and prepare for the future.

Different generations favor different learning styles. Traditionalists and Baby Boomers usually prefer face-to-face, classroom and instructor-led training activities. In contrast, Generations X and Y may resist formal training sessions and prefer to connect to people informally and quickly search all information sources. Technology tools, such as handheld devices and social-networking sites, facilitate their fast connections to information.

Traditionalists and Baby Boomers tend to communicate using formal and personal methods, such as writing e-mails, meeting face to face and holding conference calls. In contrast, Generations X and Y usually like just the right amount of information, when and where they need it, such as sending abbreviated text and instant messages, and meeting via online chat sessions.

When information exchange is effective, employees seeking information receive what they need—a knowledge gem. Unfortunately, during communication, valuable information is often lost because the organization does not have an easy-to-use method of identifying and systematically collecting and depositing gained knowledge into a repository.

Capturing critical knowledge

“Any customer can have a car painted any color he wants, so long as it is black.”
–Henry Ford

Because the talent of today and tomorrow is multi-generational, a one-size-fits-all approach to information capture, collaboration and reuse does not work. What works are multiple approaches that consider each member of the audience.

Now that the typical characteristics of each generational group are understood, the next step is to understand the two phases of information flow: capturing it and accessing it for re-use. Let’s explore two key steps to capturing it.

– Step One: Understand and identify knowledge that fuels the organization.
What information, knowledge and expertise is valuable to the organization? Some businesses are uncertain and attempt to capture all information regardless of value. A better practice is to identify critical business processes and their associated performance targets across the organization’s value chain. In other words, identify the most important business activities that yield success or are vital to avoiding failure, and identify where information gaps exist.
Analyzing key processes, creating knowledge maps and interviewing stakeholders will lead to key process identification. The output will assist leaders in understanding where the information is located, who has it and the prerequisites for information capture.

– Step Two: Capture what’s important.
Information and know-how are scattered throughout an organization in e-mails, individual and networked hard drives, binders containing operating procedures and training manuals, SharePoint or other Internet sites, conversations around water coolers, and within people’s heads.

Knowledgeable organizations use a variety of capture activities, such as on-the-job team learning processes before, during and after major activities, supplemented, when relevant, by a series of individual interviews.

“Learning before doing” is supported by a peer-assist process, which targets a specific challenge, imports knowledge from people outside the team, identifies possible approaches to address obstacles and new ideas, and promotes sharing of information and knowledge with talent through a facilitated meeting.

A U.S. Army technique called After Action Reviews involves talent in “learning while doing” by answering four questions immediately after completing each key activity: What was supposed to happen? What actually happened? Why is there a difference? What can we learn from it?

At the end of a given project or accomplishment, a process called a Retrospect encourages team members to look back at the project to discover what went well and why, with a view to helping a different team repeat their success and avoid any pitfalls.

A critical component of any capture technique is an effective method of recording information that is comfortable for the information providers and appeals to the information seekers. For example, a Baby Boomer’s preferred sharing method could be a written report. In contrast, a Gen Y would have no interest in such a report and would ignore it.

This issue raises the importance of using a variety of communication methods as well as an opportunity to emulate information and knowledge-sharing practices that occur outside the organization.

Social-networking sites such as Facebook, Wiki sites such as Wikipedia, and video-sharing sites, such as YouTube, are popular tools for capturing information, connecting with people, sharing ideas, searching for information and viewing content. Such sites are popular, free and used by each generation. Instead of inventing something new, organizations can transfer popular features from public sites into the design and functionality of corporate tools. 

For example, attaching a webcam to a laptop or using a smartphone instantly equips anyone with just-in-time ability to capture information, especially dialogue and images that are challenging to document. A handheld production studio allows for ad hoc or planned capture of interviews with experts, after-action reviews, safety procedures, an equipment repair procedure, etc.

Uploading multimedia files (sound bites and video clips) to a knowledge repository creates a powerful capture and sharing opportunity. The “YouTube” approach makes it possible for any employee to post a video to a corporate site so that any team member can watch it instantly. “Nu-tube” is the name one nuclear energy company gives its effort.

Launching pop technology is “hip” when end-users are engaged, needs are understood and the solution meets their requirements.

Make Information Accessible Quickly and Easily

“I try to learn from the past, but I plan for the future by focusing exclusively on the present. That’s where the fun is.” 
–Donald Trump

With a plan now in place for perpetually capturing valuable information, people must be enabled to access and use it. Two steps help to achieve success here. The first is to leverage technology to visually present information. The second is to involve end-users in the design of the sharing process. The following case study describes these two steps in action.

Recently, a U.S. pipeline service company realized that critical information trickled throughout its business unit. The increasing inability of talent to readily tap information sources sharply diminished the value of stored resources. The service company encountered several challenges in making information available.

The overwhelming amount of information to capture, organize, store and manage caused employees to spend days (then) versus minutes (now) searching data.

A document repository contained unmanaged versions and uncontrolled copies of files scattered in network file shares, laptops, intranet sites, CDs, flash drives and filing cabinets.

Other challenges were increasing regulatory constraints, litigation and business-continuity issues, and the rising need to capture “know how” from retiring staff.

Considering generational changes as an opportunity to plan for the future and social-networking tools as an opportunity to innovate, leaders acted. The result is “e-discovery,” a solution that increases the speed to find reporting information from across disparate business units, regulatory-compliance improvements and business-performance enhancements.

Solution highlights include preserving content on an enterprise level versus only at an individual level, implementing a self-service information portal, facilitating contextual and “smart” search, and reducing administrative costs of managing paper records.

The method of designing this solution contributed to its success. The e-discovery design team:
– Identified the valuable information needed to comply with legal requirements,
– Understood the learning, technology and communication preferences of each generation,
– Devised methods to allow users to share and access information in multiple formats, and
– Designed a tool that emulates features of popular social-networking sites (easy, visual presentation of information, collaboration, smart search and dashboards).

The e-discovery impact on the pipeline business segment includes preparing litigation-status reports in one step versus multiple steps; retrieving archived documents in minutes versus days; eliminating risks associated with damage to paper-based files; reducing employee frustration at not finding who or what they need; and serving as a solution model for re-use within the enterprise.

And most importantly, this method helped all generations of talent quickly find the right information when they needed it, so that they could perform their jobs.

“Diamonds are forever.”
–De Beers ad

Information can be a valuable organizational asset when people can quickly recall where it is stored.

Fortunately, organizations have an abundance of internal information sources: documents, expertise, lessons learned, best practices and the like. Unfortunately, waves of experts are leaving or retiring, usually without depositing their rich knowledge or revealing the location of information “gems” critical to performing business processes.

Leaders can respond by providing a variety of communication and learning methods, leveraging popular social-networking technologies, and embracing the uniqueness of each generation. The impact for the organization can be a rich field of valuable information that continuously replenishes itself.

Note – this article can also be downloaded here.

How should a “Perfect” Search project be run?

What follows is a post that I recently published on AIIM’s site as an “Expert Blogger”. (The original can be read here)

———————————————————————–

How should a “Perfect” Search project be run?

It was Friday evening, and Charlie was meeting his friends for a drink. They all worked in IT and had, between them, years of experience, especially in the area of enterprise systems and enterprise search, and liked to get together to catch up on what each was doing.

After a few pints and small talk, Charlie said “Guys, what do you all reckon would be the best way to construct a large-scale enterprise search project?”

Martin, who had had quite a lot of experience in this area, looked up and said “The main thing is that you shouldn’t underestimate what is required to get the best from a search investment.”

Charlie nodded in agreement. “But how can we help the client understand what sort of a commitment is needed?”

Ken suggested using an Agile/Scrum approach for the analysis of what the client needed as well as the development of the search UI.

“Hear, hear!” called out the others. Otis took the chance to follow that up with “You need someone who really understands what search is all about”. Martin glanced at him, and nodded. Otis carried on. “Someone who cares about search metrics, and knows what changes need to be made to improve them.”

Jan chimed in. “I agree with you on some points. You’ve got to make sure that you include all the stakeholders, and also educate the customer. Get everyone in the same room, and start with a big picture, narrowing it down to what is actually required. And, yes, create demos of the search system using real data. It helps the customer understand the solution better. However,” he continued, “I’m still careful about forcing a Scrum approach on a customer that might be unfamiliar with it.”

Stephanus put down his glass. “I’ve just finished a Phase I implementation at a client. The critical thing is to make sure that you set the client’s expectations and get buy-in from their technical people, especially in security and surfacing. And I agree with Jan. There are still a lot of companies that don’t use Agile, or Scrum, at the moment.”

Sitting next to Stephanus was Helge. He began to speak. “There are a few important things. Make sure you’ve got Ambassadors – people who really care about, and promote, the project. And ask the important question – ‘How can the search solution support the business so that they can become more competitive?’ It might be necessary to tackle this department by department. Get the business users and content owners together but, as Stephanus just said, don’t forget IT. And also make sure that the governance of the system is considered.”

Stephanus smiled. “Yes – the workshop idea is a definite must.”

Gaston, who was sitting next to Charlie, said “An Agile approach has worked for me in the past. Creating prototypes is important. Most clients don’t know what they want until they see something tangible.” “OK,” said Charlie, “how has that worked?”

Gaston continued. “Build a small team consisting of a UI designer, a developer, a search engineer, someone from the IA team, and no more than two of the business users. Having someone there from QA is also handy. Start with a couple of day-long workshops to go over project objectives, scoping and requirements gathering. Use one-week sprints, and aim to produce workable prototypes. At the end of the week, schedule a time when the prototype can be demo’d. The point is to get feedback about what is working, and what the goal for the next sprint should be.”

Mike, the last one in the group, looked around at everyone, and then back at Charlie, and said, “Charlie – there’s a lot of great advice here. One important thing to remember is that you have to work with the client to ensure that the search solution is part of the strategy. As the others have already mentioned, work with the client and educate them. Getting all the stakeholders together for some common education, collaboration and planning can really go a long way towards getting the buy-in and commitment needed for a successful project. It is also great for setting expectations and making sure everyone is on the same page.”

Charlie was impressed. He had some pretty smart friends. “Thanks guys. You’ve all made some excellent points. Let me buy you all another round.”

The above “conversation” was based on a discussion on LinkedIn. (Click here to read it.)
Many thanks to the contributors in that discussion who graciously allowed me to write this post.

Why giving the users what they want is not enough – the Importance of communication

What follows is a post that I published on AIIM’s site as an “Expert Blogger”. (The original can be read here)

———————————————————————–

Why giving the users what they want is not enough – the Importance of communication

As you are all most likely aware, giving the users what they want is not enough. Why? Because, often, the users don’t really know what they want.

Consider the following example:

A large restaurant chain has restaurants across the globe. Each restaurant needs to maintain documentation such as construction plans, recipes, procedures, and methodologies. The “critical” documents are kept in a legacy ECM system, and several SharePoint doclibs store the non-critical documents. These systems are located centrally, and are all globally accessible.

The business users work primarily with the legacy ECM system, but often also need to work with the documents in SharePoint. When a document was needed, a search was done either in SharePoint or in the legacy system, using its rather complicated search feature.

Performing searches in two different places wasn’t easy, or efficient. And so, the users cried out “Give us one central place where we can perform a search!” When asked for more details, the business users replied “Make it like Google”.

The restaurant chain’s IT people (who might have been a little too enthusiastic) swung into action, without any more questions. They found a tool that would allow SharePoint to “talk” with the legacy ECM system and crawl all the documents, indexing everything it could.

After working many weeks getting things set up and configured, the IT people sat and watched as SharePoint crawled through the content. Once it finished, initial tests were done to ensure that a search would actually return content. It was working perfectly. And it was “just like Google”.

A demonstration of the search system was given to the users, who were ecstatic. They were able to easily enter search terms, and get results from the SharePoint doclibs as well as from the legacy system’s repositories. It was fantastic. It was easy to use, and there was no extensive training required. There was much cheering and showering of the IT people with small gifts. After further testing, the search facility was officially moved into production.

For the first couple of months the users were keen to use the “enterprise search facility”. But then, gradually, complaints started being heard. “The search results contain too many hits”, “Why isn’t it more like the search feature in the legacy system?”, or “The search results just show the title of the document.” Users went back to using the legacy system’s search feature for the “important” documents, and the SharePoint search was just used for the documents in the document libraries. In short, the “central” search facility was a failure.

What had gone wrong here? The business users wanted a single search facility, and they wanted it “like Google”. And that’s what the IT department had delivered – there was a single box where users could type in the words they wanted to find, and the search would return documents from all the different document repositories.

In this case, however, the users didn’t really know what they wanted. Yes, they wanted “easy”, but they also wanted something that allowed granular searches to be done (just like their “old” search tool). They also wanted to know where the search results came from. And they wanted the “important” documents to appear at the top of the search results.

The IT team should have asked more, and then they should have listened more. And then they should have repeated this process until it was understood what the business really needed. The team had followed a Waterfall approach, where requirements were gathered up front and then were not allowed to change. Agile techniques could have been used instead, where a “finished” product is shown to the users several times during the project. The users could give feedback, which would lead to a better understanding of what they want, as well as the ability to refine the solution.

Fortunately, the IT team had the opportunity to improve the search system. They added a small button to the search results screen, where users could provide immediate feedback. Working with this, as well as sending out regular “satisfaction” questionnaires, the IT team was able to identify areas for improvement. These included not only changes to the user interface and results screen, but also further refinements needed in the indexing process. Every four months, the improvements were presented to the business, and then implemented.

Now, the business users don’t use anything else.


Is True Enterprise Search actually possible?

What follows is the first post that I published on AIIM’s site as an “Expert Blogger”. (The original can be read here)

———————————————————————–

The idea of “Enterprise Search” is an attractive one. It certainly would be worth its weight in gold to have a single search location where key words can be entered and, within a fraction of a nanocentury[1], results would be displayed that include both structured, and unstructured, content from across the numerous repositories, silos, systems, archives, file shares, cabinets, clouds, etc, etc.

But is true Enterprise Search really possible? I know there are several tools that provide “Enterprise Search” functionality, but these usually allow you to search over a fixed number of different repositories, usually containing similar data. Maybe it’s a set of defined documents, or a database, or similar. You certainly get the opportunity to make available content from disparate sources, but can you consider that “enterprise”?

If you consider what’s involved in searching across the “Enterprise”, it should be quite easy, right?

Well…consider this:

1. First off, you need to be able to identify where your structured, and unstructured, data and content is. Remember, here we are dealing with the complete enterprise, so don’t forget that this includes file shares, hard drives, database systems, ERP systems, ECM systems, etc, etc. And what happens if new “sources” are added?

2. Next, you need to know what sort of content you have. Can the Enterprise Search application “read”, or parse, the data/content you have? There certainly are ways to make this possible. You can install an iFilter, for example. But you’ll need one for every format that you have in your enterprise. (See the first sketch after this list.)

3. You need a way for your Search application to connect to all of the different “sources”. In principle this is, again, possible. (However, I would imagine that this would require a lot of configuration.)

4. How frequently is your data, and content, changing? For example, in an ECM system, is the content constantly changing as new documents are added? Maybe several major and minor versions are kept of each document. Do you need to index all versions, or only the latest? What about data in your ERP system? How accurate do you want your search results to be? Do you just keep continuously indexing? (See the second sketch after this list.)

5. Security. Do you want users to be able to see results for data, or content, that they would not have rights to in the native application? If there are disparate security systems in place, how do you translate ACLs from them into a common format? Do you use “early binding”, or “late binding”? (See the third sketch after this list.)
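On point 2: in practice, making content “readable” usually means keeping a registry that maps each file format to a text extractor (this is the role an iFilter plays in Microsoft’s indexing stack). Here is a minimal sketch of the idea – my own illustration with hypothetical extractor functions, not any particular product’s API:

```python
from pathlib import Path

def extract_plain_text(path: Path) -> str:
    # Trivial extractor for plain-text formats
    return path.read_text(encoding="utf-8", errors="replace")

def extract_unknown(path: Path) -> str:
    # No parser registered: fall back to indexing the file name only
    return path.name

# One extractor per format. A real deployment needs an entry for every
# format in the enterprise (PDF, DOCX, DWG, ...), which is exactly the catch.
EXTRACTORS = {
    ".txt": extract_plain_text,
    ".log": extract_plain_text,
}

def extract_text(path: Path) -> str:
    handler = EXTRACTORS.get(path.suffix.lower(), extract_unknown)
    return handler(path)
```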
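On point 4: one common compromise is incremental indexing – record when the last pass ran, and re-index only what has changed since. A rough sketch of the general technique, assuming a plain file share where modification times stand in for whatever change markers your repositories actually expose:

```python
import os

def changed_since(root: str, last_crawl: float):
    """Yield files under `root` modified after `last_crawl` (epoch seconds).

    The caller records the current time before each pass and feeds it
    back in on the next one. The freshness of the search results is
    bounded by how often this runs -- the trade-off point 4 raises.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > last_crawl:
                    yield path  # re-index only the changed documents
            except OSError:
                continue  # file vanished between listing and stat
```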
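And on point 5: the binding question is about when permissions are checked. “Early binding” bakes ACLs into the index at crawl time – fast to query, but stale if permissions change between crawls. “Late binding” asks the source system at query time – always accurate, but it costs a permission check per hit. A minimal sketch of late-binding result trimming, where `can_read` is a hypothetical callback standing in for the source system’s permission check:

```python
from typing import Callable, Iterable, List

def trim_results(hits: Iterable[dict], user: str,
                 can_read: Callable[[str, dict], bool],
                 page_size: int = 10) -> List[dict]:
    """Late-binding security trimming: filter raw hits at query time."""
    visible = []
    for hit in hits:
        if can_read(user, hit):        # one round-trip per candidate hit
            visible.append(hit)
        if len(visible) >= page_size:  # stop once the results page is full
            break
    return visible
```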

As you can see, it’s not that simple.

Until we have a way to “capture” all information from an undefined number of sources, with an undefined number of data and file formats, and with disparate sets of ACLs, I return to my opening question: “Is True Enterprise Search actually possible?”

What are your thoughts on this?


[1] A nanocentury is approximately 3.155 seconds

A couple of reasons for me to travel to Switzerland – ARMA & Chris Walker

On Monday morning, I’m heading to Switzerland.

The Swiss Chapter of ARMA is having its inaugural meeting in Basel, and a fellow tweeter of mine, Christian Walker, will be giving the keynote speech there.

Because I’m “in the neighborhood” (sort of), he suggested I come along.

I’m really excited…for two reasons.

This is going to be the first ever meeting of ARMA’s Swiss Chapter!
I’m really pleased that I have the opportunity to be present for this.

The second reason is that I really, really enjoy the chance to meet some of the really smart ECM people that I tweet with. Christian is a senior consultant at Oracle in Edmonton, as well as an “expert blogger” for AIIM. I’ve been connected to Christian for over a year now, and have been involved in many Twitter discussions with him (and others) on subjects ranging from ECM through to “toilet paper” (long story…maybe I’ll cover it in a separate blog post). Needless to say – I am looking forward to meeting him in person.

(I had an excellent opportunity to meet up with another fellow tweeter, Laurence Hart, in Paris a few months ago – I was invited as a guest blogger to Nuxeoworld, where Laurence was giving the keynote speech. Because of various circumstances, I couldn’t make it – something I still regret.)

In any case – I’ll be taking notes during the sessions (especially the keynote speech), and plan to write a blog post once I get back.

Related Links

Search terms that have highlighted my site

Interesting…

WordPress keeps a log of the search terms that have been used in various Search Engines (Yahoo, Bing, Google, etc).

Every day I check which search terms have been used, and normally find it quite interesting. It’s always fun to then try and match up the searches with the posts that were read.

As well as “per day”, it’s also possible to see the search terms used over a week, a month, a quarter, a year, or for all time.

I checked the search terms used over the last quarter…

The top search term that resulted in my blog coming up in the search results was… “frustrated face”!

I was very happy to know that my blog, which covers such things as “document management”, “compliance”, “psychology”, “UX”, “innovation”, etc, was showing up as a hit when someone searched for “frustrated face”. And this was the top search term!

Luckily, I checked the “all time” search term statistics. The top two were “technology acceptance model” (which leads to these posts which I am very proud of), and “innovation and technology” (which leads to some of the other posts I am also very proud of).

Thank goodness for the “ten thousand foot view”.

By the way – at least a search for “frustrated face” still lists “technical posts”.

Delicious’ tasteful reaction to negative feedback


I recently wrote a post about the changes that the social bookmarking site, Delicious, had made.

Since then, I have read other reports that the Delicious team have been working around the clock to fix things. There has been a lot of activity on their Facebook page as die-hard fans have been voicing their opinions over the changes.

I visited the site. There were a lot (and I mean “a lot”) of very unhappy people. The comments were almost 100% scathing about the changes that Delicious had made. I tried to read through all of the comments, but they just kept going, and going.

It seems that, in the hours after the “change”, a lot of functionality had been lost. This, combined with the fact that no-one knew the changes were coming (Chad Hurley and Steve Chen hadn’t made any announcement), was what people were livid about.

I’m not certain, but it almost seems that the Delicious team were surprised at how many people were affected by this. Here’s a quote from a post that “All Things D” published on the 26th of September:

Expectations aren’t terrifically high for the new Delicious, given the rareness of tech comeback stories and the fact that Delicious was never really that popular.

(Looking at the comments on that post, you can already see a hint of the fury that was coming).

One thing that did strike me was the reaction from the Delicious people (“delishites”). It was quite responsive. They seemed to swing into action and, when there was a valid complaint, they responded. They also set up a blog where they posted regular updates on their activities.

I was impressed. This seemed to be a company that was reacting to their users. And their users seemed to be responding positively to this. There seemed to be a change in the mood…

However…

Looking at Delicious’ Facebook page lately, it seems that the positive “vibes” were short-lived. It seems that, as more people discovered the “disaster” of the changes that were made, they gravitated to Delicious’ Facebook page to spray their fury on Delicious’ wall.

In any case…

In any case, it seems that Delicious are working hard to “fix” their product. In fact, when I logged in today, I noticed that the new Delicious was starting to look more and more like the old Delicious.

I’m curious how this will continue…

Note: here is one of the latest posts on “All Things D” regarding the Delicious redesign.

Delicious? The new flavo(u)r is an acquired taste.

The social bookmarking site Delicious has just had a make-over.

And I’m still getting used to it.

2008

The last time Delicious had a make-over was back in 2008 (when it went from being Del.icio.us to being Delicious). Click on this link to see a video that was released at the time to “show” the differences. This post by Demetrius gives a side-by-side comparison.

Not everyone liked the new design. Nathan Bowers posted a long list of “issues” he saw with the 2008 redesign (along with a marked-up screen shot). See his interesting post here. A lot of the criticisms he made had to do with white space.

2011

Fast forward to the present day: Delicious has been bought by AVOS (founded by the founders of YouTube) and, after several months, the social bookmarking site has a new face and is back in beta.

What this effectively means is that the changes to Delicious ain’t over yet. In fact, it seems that Chad Hurley and Steve Chen are open to suggestions about the site. In a Delicious blog post, they used a Marty McFly (Back to the Future) quote to describe how they felt:

“What if they say I’m no good? What if they say, ‘Get outta here, kid, you got no future?’ “

And what are the changes?

They have introduced “stacks”. These are groups of related bookmark links that users can create and make available to other users. Stacks are demonstrated in a YouTube video that was posted at the same time.

The whole site has been redesigned. There is also a lot of white space – the links are now further apart from each other. And…they have stripped out a lot of other features.

Click here to see the “what’s new” list

And the public reaction?

Well – if you read the comments under the video on the YouTube site, you’ll notice that there are a lot of unhappy people who do not like the new design. (You’ll also notice that, in comparison to the number of views, there really are not a lot of comments.)

I’m sure these comments are valid. For many, many years, Delicious has been more of a “personal on-line list of bookmark links which users have been able to tag”. It wasn’t particularly “fluffy” (in a Web 2.0 way), but it was functional.

Actually, the best analogy I can make is that it was like WordPerfect 5.1 (a DOS-based word processor that used to rule!). Not easy to use straight away, but once you got used to it, it was a great tool! WordPerfect lost ground to MS Word, which was GUI-based (and therefore prettier) and easier to use.

And Delicious was like this. Not the most elegant of social bookmarking tools (like a standard mobile phone in a smartphone world), but for the hard-core users, it was exactly what you needed.

Now that Hurley and Chen have added some of the Web 2.0 fluffiness, Delicious is a different tool. Not everyone is going to like it.

Do I like it?

No.

For many, many years I also used Delicious for capturing useful links for future reference. Then I discovered Diigo, which is now my favourite tool for capturing my favourites.

But…the one thing I really did like about Delicious was its “Recent” bookmarks page. This listed all the links that people around the world were saving. It gave me an interesting insight into what was interesting people, and I would often make a game of trying to look for patterns.

But that wasn’t the main reason I would view the “Recent” page. I was always looking for interesting links that I could share with others in the twittersphere. By frequently refreshing the “Recent” list I was getting some good stuff. (You could also say I was using “crowdsourcing” to tell me what I should be reading). And this is something that I can’t do easily with Diigo.

The new Delicious still has a “Recent” page, but it is now split into two tabs. One shows all the new “Stacks” that people have created, and the other shows the list of recent links. It’s bad enough that every time I do a page refresh the “Recent Stacks” tab is displayed, but when I do click on the recent links tab, there is now so much white space (and so many tags and descriptions) that I only see the top two “recent” links. I have to scroll down the page to see anything else.

I know that this is not a big deal, but it does mean that Delicious is no different from the other social bookmarking sites. And, therefore, there is no real reason for me to type “www.delicious.com” into my browser address bar any more.

But wait, there is hope…

As I mentioned above, Delicious is in beta once again. So, maybe I can suggest a few tweaks.

Let the user decide:

  • which tabs they want to see by default
  • the amount of detail they want to see (links only; link + tags; link + tags + description; etc)

Steve, Chad… are you listening?…

Note – on the AVOS blog site, there is a request for feedback with only an option to send an e-mail. This seems a strange (and old-fashioned) way to ask for feedback (especially in a Web 2.0 world), but I guess they didn’t want lists and lists of flaming criticisms (such as those on the YouTube site).