Executive Reputation Case Study


Updated on 09/16/17 3:16 PM PST by Reputation X

Over the years we've managed the online reputations of many executives across industries including clothing, manufacturing, hospitality, biotechnology, and government (foreign and domestic). But an especially large portion have been executives in financial services. This case study focuses on one of those clients, whose problems included the New York Times, Bloomberg, and Ripoff Report. Names and circumstances have been adjusted to some degree to protect the identity of our client.



Industry: Financial Services

Entity: Individual with overlap onto corporate online brand

Issue: NYT, Bloomberg, Ripoff Report

Technique: Mixed. Dilution, Suppression, Protection

Duration: 8 months (first changes visible after 90 days)


Our client is a well-known member of the New York financial community who was planning to run for political office. Because of the client's notoriety he was sought out by journalists and bloggers, and sometimes hunted by activist groups. He had been tangentially involved in the operations of the financial services company where he worked and was caught in the "blast radius" of bad company press. The result was investigative journalism, focused on New York politicians and our client, that painted him in a harsh light. To make matters worse, the attention from major news outlets was exacerbated by bloggers and others seeking to worsen his online reputation by posting opinions on negative review sites.



Reputation X was retained to clean up search results for our client. The first page of search results contained a New York Times article, a Ripoff Report page, and a Bloomberg article, all of which were negative. Our job was to remove as much as possible, to provide a counter-narrative, to dilute with honest and relevant stories, and to push down (suppress) any remaining negative search results so he could move on with his life.


Our first step was to research our client and similar people. We developed a persona for him that included an example of "ideal" search results. We asked what online representation a person like him should have, then compared that with his then-current search profile.

We then looked at his problem pages: the New York Times, Ripoff Report, and Bloomberg. None of these sites will normally remove content at the source (though many smaller publications will). Legal options were considered but were judged to be expensive, with a low probability of success. Because the pages could not be removed at the source through relationships, negotiation, or legal pressure, we looked at removal from search results.

In some cases search engines will remove pages from search results. But the content of these pages did not fit the search engines' criteria for removal. In the end it was clear the pages could not be removed at the source or from search results.

We then opted for a three-step campaign:

1. Dilution and Counter-Point

2. Suppression of Negatives

3. Protection from Future Problems


Having thoroughly researched our client and compared his search profile with those of similar people we discovered gaps in the kinds of online profiles people and search engines find relevant. This provided opportunities for content and technical development that could help our client. 

Knowledge Graph as the Starting Point

Example of a Knowledge Graph (not a client), with reverse-engineered data boxed in red.

We also found that our client, while well-known, did not benefit from a Knowledge Graph panel in search results. The Knowledge Graph is a compilation of content displayed in the upper right corner of search results, drawing relevant knowledge from various websites. It occupies about 40% of the space at the top of normal search results. We knew that focusing on the Knowledge Graph would provide dilutive content in search results while also necessitating a Wikipedia page and the content that supports one. This "killing two birds" approach became the starting point for the reputation campaign.

Reverse Engineering Results

In order to earn the display of a Knowledge Graph, a series of events needed to happen. The Knowledge Graph draws much of its information from Wikipedia and curated metadata sources such as Wikidata, the Virtual International Authority File (VIAF), Wikinews, and others. We reverse-engineered the graphs of similar people and drew up a content, publication, and development plan based on the building blocks of the graph.

Diagram of where knowledge graph data comes from
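As an illustration of this reverse-engineering step, the curated data behind a Knowledge Graph panel can be inspected directly. The Python sketch below builds a request URL for Wikidata's public `wbgetentities` API, which returns the structured claims, labels, and sitelinks that knowledge panels draw on. The entity ID used here is an arbitrary public example, not connected to this case study.

```python
# A minimal sketch of inspecting the structured data behind a Knowledge
# Graph panel. Wikidata's public API (action=wbgetentities) returns the
# claims, labels, and sitelinks that knowledge panels draw from.
# The entity ID "Q42" is an arbitrary public example, not the client.
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def wikidata_entity_url(entity_id: str) -> str:
    """Build a wbgetentities request URL for one entity's structured data."""
    params = {
        "action": "wbgetentities",
        "ids": entity_id,
        "props": "claims|labels|sitelinks",  # the fields a knowledge panel uses
        "format": "json",
    }
    return f"{WIKIDATA_API}?{urlencode(params)}"

print(wikidata_entity_url("Q42"))  # fetch this URL to see the raw entity data
```

Fetching that URL (or browsing the same entity on wikidata.org) shows exactly which curated facts exist for a person, which is the gap analysis behind comparing our client to similar, better-represented people.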


Engineered Notoriety

A person of stature can usually earn a Wikipedia page if he or she is well-known, has a degree of notoriety, and has accomplishments that can be verified with references. Wikipedia pages without these things are deleted in short order. Our client was well known, but did not quite have the level of documented notoriety we felt was needed to earn a Wikipedia entry. So we created some.


Our relationships enabled us to locate the perfect Forbes author. But before the author could be approached, our client needed something newsworthy to act as a launchpad for the eventual story. We worked with our client to create a story and an accompanying press release. We then launched it, garnered social media praise, and provided the press release as the kernel for the story. The author wrote the piece according to Forbes' own journalistic guidelines, and it was published.

We then arranged the development of a few more third-party articles. Within a reasonably short period of time we had enough reference material to create the Wikipedia page. 


We worked with a Wikipedia author to develop a basic, honest, and relevant profile, using existing and recently created third-party content as reference material. Wikipedia pages are publicly editable and must always be created with that, and the highest standards, in mind: all content must be verifiable and true. We succeeded in creating a Wikipedia page, Wikimedia Commons image content, and more.


With the publication of the Forbes article and others, the Wikipedia page, and online images, search results began to change. We then moved on to strengthening his LinkedIn, Facebook, and Twitter pages. At this point the Knowledge Graph had not yet made its debut. We revisited Wikipedia, linked the new and existing web properties, press, and social media, and then moved on to creating metadata.

Metadata

Metadata is data that describes other data. It summarizes information and makes it easier for search engines to verify which information to include, and where. We implemented metadata on the websites we created for our client so the information they contained would be indexed by search engines more effectively. We also worked with websites seldom seen by people but key to the operation of search engines: sites that describe relationships between data. In essence, we used those sites to draw road maps between the Wikipedia page and other data relevant to our client. Google followed the breadcrumbs we had laid out and, voila, the Knowledge Graph appeared!
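One common form this metadata takes is schema.org structured data embedded in a page as JSON-LD. The Python sketch below builds a schema.org Person object whose `sameAs` links connect a site to Wikipedia and social profiles, the kind of "road map" between data sources described above. All names and URLs are placeholders, not the client's real details.

```python
import json

def person_jsonld(name, job_title, employer, same_as):
    """Build a schema.org Person object for embedding in a page as JSON-LD."""
    return {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "worksFor": {"@type": "Organization", "name": employer},
        # sameAs links act as the "road map", tying the page to Wikipedia,
        # Wikidata, and social profiles that describe the same person.
        "sameAs": same_as,
    }

# Placeholder data for illustration only -- not the client's real details.
markup = person_jsonld(
    name="Jane Example",
    job_title="Chief Executive Officer",
    employer="Example Financial Group",
    same_as=[
        "https://en.wikipedia.org/wiki/Jane_Example",
        "https://www.linkedin.com/in/jane-example",
    ],
)
print(json.dumps(markup, indent=2))
```

The resulting JSON would typically be placed in a `<script type="application/ld+json">` tag on the person's official site, giving search engines an unambiguous link between the page and the other sources describing the same person.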

Improvement: Four Months

Bloomberg, the NYT, and Ripoff Report had by this point all moved lower on the first page of search results. They had been replaced by the Forbes article, Wikipedia, the corporate leadership page from the parent corporation, an image bar depicting images of our client we had placed on various sites, and more. But the negatives, while far less visible, still existed on the page. Ripoff Report hovered between pages one and two. By working to earn the graph, we had created an entire ecosystem of relevant content engineered to succeed.

Second Phase

Social media profiles were still languishing on page two, so we redoubled our efforts to attract followers, refresh content, and improve overall relevance. Over a period of a few months these sites gained ground: Facebook and LinkedIn appeared on the first page of search results and stayed. Ripoff Report had been banished to page three and continued to drop. The NYT article was relegated to the bottom of page one, and Bloomberg faded between pages one and two.


  • Ripoff Report suppressed
  • Bloomberg suppressed
  • NYT visibility reduced by 90%
  • Knowledge Graph earned
  • Wikipedia page created
  • Client now directly controls 70% of the first 20 results and 40% of the first page.

Three extremely difficult sites to move were diluted, then pushed down. The highly visible knowledge graph was developed and now dominates search results. Search results affecting the parent corporation are completely clear, though some lingering issues with NYT exist.

Other people have since added to and edited the Wikipedia page, which is stable in the number two position just below the corporate leadership page. Our client's LinkedIn profile sits below the Wikipedia page, followed by new articles that have since replaced the original Forbes article. Negatives that were originally seen by 100% of searchers are now seen by about 5%.

The client is very happy with the results and understands that total elimination of items like the New York Times article would take another year of development.


Written by Reputation X

The Reputation X team is a collection of online reputation experts working in the areas of content planning, reputation strategy, search engine marketing, social media, technical public relations, and other more esoteric realms. We provide white-label reputation management, protect reputations and clean up search results for agencies, brands and people.
