Can you trust reputation measurement tools?

Today, we seem to love reputation trackers and similar reporting tools – some strictly about reputation, others about trust in brands, still others about influencer or blogger rankings, and so on.

The first question you need to ask – before you start celebrating or commiserating, depending on where you or your organisation sits on that tracker's scale – is what research method was used, and what data underpinned that ranking.

Prof Anne Gregory, Dr Kevin Ruck, Dr Heather Yaxley and many other academics have often argued for the importance of research and data in any reports or strategies we, as PR practitioners, choose to present to our clients, publics or media.

If the data we use is not robust, or the research methodologies deployed do not stand up to scrutiny or third-party verification/audits, then the nicely crafted picture of reputation, trust and the rest will come crumbling down.

This brings me to the essence of this post: can we trust a reputation barometer/tracker/tool that does not disclose its research methodology? One that doesn't use a qualified researcher or statistician, and that draws sweeping conclusions from a very small segment of the service's or brand's users?

If, like me, you use or will use such ‘ratings’, read on before you recommend one to your Client or Employer, or before you settle on one yourself. Below are three key areas – and the questions that flow from them – you may wish to consider before signing on the dotted line:

1: How is the respondents’ selection carried out, and what qualifies as a “rating”? How is the XX figure a representative sample for a population of 64+ million in the U.K. alone, and how do the statistics change for international operations? What sampling methodology ensures the sample is representative of the overall population of consumers or service users? What surveying techniques were used, and what is the margin of error? Has the research methodology been validated by statisticians or researchers?

2: Be particularly careful when the Reputation report/tracker/barometer – whatever its title – includes variable answers such as ‘somewhat’, ‘unlikely’ or ‘very familiar’ with regard to the respondents’ knowledge of the brand. How is each category qualified, and what happens to those who are merely “familiar”, for instance? What are the clear definitions of these variables, have they been explained to the respondents, and what formula was used to quantify them as an indicator in the final report?

3: My CIPR Council colleague Laura Sutherland recently published a very interesting article for Influence, in which she argued that we need access to a company’s financial data so that we can track the impact our activities may have had on the bottom line. Similarly, but for a different reason, we need access to the client’s business data to understand how a loss or improvement in reputation may have affected them overall.

So, when your Client is in a ‘pool’ with others – in third-party publicly available reports – or when you commission a bespoke report, ask what data the companies analysed provided to the report’s author. Did the author have access to their financial results/turnover, so that a one-point increase in reputation could be matched with an increase in sales (or vice versa)?
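On the margin-of-error question raised in point 1 above: a useful rule of thumb is that, for a simple random sample, the margin of error depends almost entirely on the sample size, not on the size of the population being described – which is why "64+ million people" is a red herring, while an opt-in panel's selection method is not. The sketch below is a minimal, hypothetical illustration of the standard worst-case formula (95% confidence, assuming genuinely random sampling, which opt-in reputation panels rarely achieve); the sample sizes are invented for the example.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a simple random sample of size n.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the proportion
    that maximises the error, giving the most conservative estimate.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical panel of 1,000 respondents: roughly +/- 3.1 points.
print(round(margin_of_error(1000) * 100, 1))

# Quadrupling the sample only halves the error: ~ +/- 1.5 points.
print(round(margin_of_error(4000) * 100, 1))
```

Note that the formula contains no term for the population size: surveying 1,000 people says roughly as much about 64 million Britons as about 640,000 – provided the sample is actually random. If a tracker cannot tell you how respondents were selected, no margin-of-error figure rescues it.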

My recommendation is to understand and question the robustness of the reports you are given before you present them, nicely designed, to your Client.

There are practitioners who wrongly assume that we have no business questioning the facts, data or other material evidence we are presented with before we pass it on to our clients – we do, now more than ever.

The ‘Bell Pottinger’ mentality of whitewashing, painting a pretty picture and simply throwing nonsense on the Client’s table is what keeps us all struggling for the recognition of PR as a valid, strong, ethics- and values-driven profession.

Your Clients trust you – it’s as simple as that. They trust you to tell them the truth, to tell them what they need to change and how they need to do it. If you don’t know how the data used for their reputation/trust reports has been collected and analysed, how can they trust you?

If the media question the content of one of those reports – especially one used in corporate brochures or annual reports – and ask for clarification of the various indicators, your Clients will turn to you for answers, won’t they? If you can’t provide them, what do you think will happen?

Picture credit: Fleur Treurniet

Ella Minty

Founding Chartered PR Practitioner, CIPR and IoD mentor, published author and university lecturer, Ella has almost 20 years of high-level experience with governments and international organisations in corporate reputation, leadership and crisis management, across business disciplines and governments, including investment markets, lender organisations, national and international media, NGOs and affected communities. She is a 2014 Service Award Winner of the Society of Petroleum Engineers, an Assessor of CIPR’s Chartered Practitioner Scheme, and an elected member of the CIPR Council (2017–2019). She has handled some of the most prominent international crises of recent times, developed the Leadership Development Programme for MENA young engineers, and advised several governments on their national branding strategies. Her list of clients includes McKinsey & Company, Boston Consulting Group, Total, British Petroleum, Shell, Centrica, Averda, US Energy Freedom, Private Investment Development Group, the European Commission, the House of Commons, the Renaissance Foundation and many others. She is also a member of McKinsey’s Executive Panel.