Thoughts. News. Opinions.

News & Opinion Pieces

6 strategic considerations when planning your data brand

The amount of data will only increase, and so will the opportunity to utilize it. Regulation is lagging behind. But while most organizations dodge the data dilemma and reactively minimize their data usage, a few progressive organizations are stepping forward and seizing the opportunity to make communication about data a new market differentiator through data branding. Here are 6 strategic considerations to keep in mind when planning your data brand.


1. The data brand has to begin by answering your big data-WHY?

High credibility, a strong brand and a good value proposition are crucial when a company requests access to personal data. Can the company tell me why it wants to handle and process my data, beyond wanting to make a profit?

When answering the big why, the company has to be able to weave in the added value the user gets in return for sharing their data. And the company has to be able to explain the specific, defined purpose for collecting the data, and the time frame for keeping it, in a transparent, easy-to-understand, relevant and yet engaging way.

Example: Grundfos is a B2B corporation that has found its data-WHY. A few years back, Grundfos was just a water pump manufacturer. Today, it is solving the world’s water and energy problems. It does this through its use of IoT, which gives it dynamic information from pumps around the world and a close-to-real-time relationship with the end user.

Through data on people’s water usage, Grundfos can secure the water supply in Africa so that it does not become a tool for corruption and power struggles. And it is probably easier to win the big contracts in Asia and attract the best employees from America when you’re solving the world’s water problems instead of just being a water pump manufacturer from Denmark. Data can serve a greater purpose and let B2B companies go from being a product brand to a solid data brand with new data services.


2. The data brand should build a value-based win-win data relationship with the customers

Customers give their data to the companies they trust. Most Danish companies can’t offer the same amount of free services as Google or Facebook in return for data. Customers might even have to pay for the service and share their data at the same time. To make them willing to do so, there has to be a greater purpose and some sort of extra value added.

The customers will be a central part of the company’s growth in the data driven world.

You can’t just leave valuable unstructured customer data floating around in customer service. Data has to be moved into the organization’s core. But you have to provide added value in order for people to share more data.

Example: Amazon has become a major corporation because it has built a strong data relationship with its customers. The customer shares data and gets something in return: a transaction of privacy and value that makes sense for the user.


3. The data brand has to practice privacy-by-design to the fullest

GDPR introduces the data subject and establishes consumers’ right to their own data and its protection. This adds extra administrative work for organizations but is fundamentally in the best interest of all citizens. Almost all customers have less trust in companies that have violated the GDPR, and around 50 percent wouldn’t buy their products.

So it is, of course, important to comply with the regulation in a branding context. But it is really about going the extra mile and doing more than the GDPR demands, which can then be used to elevate one’s position in the market.

Example: Pension funds could benefit from building a transparent, consent-based universe that provides an overview of the customer’s financial situation. It should not only be a GDPR-compliant universe, but a user-controlled data universe that might even enable the customer to discover the financial benefits of moving some of their savings into an annuity account etc. An added value like this could be exploited to the fullest in the pension company’s data brand.

Source: Marketing Week


4. The data brand has to kill Big Brother

Surveillance and unscrupulous use of data against the interest of the customer/citizen, and for the sole benefit of the state/company, is an extremely bad branding position. And there is a growing number of stories of how daunting data misuse can be.

A strong data brand not only communicates how data is collected and used, but also why there has been a conscious choice not to use e.g. a facial recognition system – but social media data instead. This is part of creating a customer-centered value position that in no way revolves around surveillance. Data branding is about progressively communicating choices and opt-outs.

Example: The “Chinese case” is Big Brother gone wrong. It shows how a massive rollout of surveillance cameras with facial recognition systems makes it easy for the government to find those it considers criminal.


5. The data brand has to be completely ethical. Ethics-by-design

The law is one thing: everyone has to follow the GDPR. Ethics is much more than rules. It is about all the extra things you do because we as a society find them right. And the idea of an ethically responsible company is not new – CSR is an old discipline.

But the data brand is to a greater extent built on what Michael Porter and Mark Kramer described as “creating shared value”. The data brand has to be soaked in ethics. And the CSR department could eventually be closed down, since ethics has to be naturally built in everywhere.

But just like greenwashing, “data-ethics-washing” is also a thing. Where some companies make a great effort to actually minimize waste, CO2 emissions etc. in every aspect of operation, some greenwashed companies simply promote a couple of green ideas that look good on paper.

An actually data-ethical company is able to brand itself on its good initiatives and activities, whereas the ethics-washed companies can be recognized by their complete lack of evidence of any real action, alongside some good-sounding statements of principles. It is therefore important for companies and organizations to make ethics part of the company culture and aim for ethics-by-design, where ethics is integrated into every process and data relation.


6. The data brand has to relate to the MyData-movement

If we glance into the near future, we’ll quickly realize that part of what will characterize the digital world is the “MyData” industry.

The strongest data brands will not be the ones that simply ask users to give “an informed consent” after having read three long pages of privacy policy, or after just clicking OK to get the information they were really searching for on the website. The strongest data brands will get a “trust-based consent” from the customer, who holds the core belief that the company has a wholesome and responsible way of processing the data and a greater motive for doing so.

Example: Hub of All Things (HAT) is a great example of a service that allows users to administer all of their own data, with companies getting access to it through them. Platform companies create secure data libraries that collect the individual’s personal data from the health care system, social media, behavioral data from Fitbit watches and other sensors, and more – in one secure “data bank”.

It is then up to the individual user to share their data with those companies and authorities that give the users added value in return. And only the strongest data brands will receive access to the client’s data as we move towards the MyData movement’s Me2B paradigm.

How to use data, AI and behavioral economics to fight insurance fraud

The volume of insurance fraud in Denmark is increasing. That is the conclusion of Insurance and Pension Denmark’s latest report on this delicate subject. The accumulated amount of fraud related to personal injuries (such as work injuries or early retirement), where the size of a claim can easily amount to a few million DKK, has increased by 13%. Generally, when it comes to, for example, loss of work ability or potential disability, we are entering an area where the data is more sensitive than in a case that is ‘just’ about my stolen bike. This is where data ethics comes into the picture.

The Director of Claims at the Danish financial services group Alm. Brand also recently pointed out that the amount of fraud tends to increase as the claims reporting process is digitized and personal contact correspondingly decreases. So how can insurance companies fight fraud – both the harder kind, where the fraudulent activity can add up to millions, and the softer kind, where one tweaks the facts a bit during an online claim filing process?

Data Ethics and AI

In the report “Data Usage and Data Ethics”, which I wrote on behalf of Insurance and Pension Denmark, we take a closer look at the fight against fraud and how data is – and can be – applied in this process. It is in no way ethically justifiable to commit fraud or submit false data at the expense of others. The data ethical problem arises when honest customers are wrongfully subjected to close investigation. Some companies might even abstain from going too far in fighting fraud because bad stories, which are often the easiest to tell in the newspaper columns, can hurt the brand. But is that ethically justifiable to the honest clients who are paying the price? Is it more ethically justifiable to apply more or less data when fighting back against insurance fraud? It really gets tricky when we add to the picture the fact that some customers actually want the insurance company to be somewhat thorough in the claims filing process so that potential fraudsters are caught. Should insurance companies really add useless questions to the filing process just to make customers feel that they’re doing enough?

This is where technology plays a crucial role, because AI with many different datapoints can recognize complex patterns in claims and pinpoint those who commit fraud while diminishing the number of false positives fairly early in the process. So here we are dealing with a data ethics based on the application of MORE data, not less. And artificial intelligence can sort through unstructured data in particular and use it intelligently, so honest clients can continue to file their claims undisturbed.

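As a deliberately tiny illustration of the pattern-recognition idea (not any insurer’s actual model), the sketch below flags claims whose amount deviates strongly from the rest using a plain z-score on hypothetical data; a real system would combine many more datapoints and far richer models:

```python
from statistics import mean, stdev

def flag_outlier_claims(claims, threshold=2.0):
    """Return ids of claims whose amount lies more than `threshold`
    standard deviations from the mean. A stand-in for the
    multi-datapoint AI models described in the text; all data invented."""
    amounts = [c["amount"] for c in claims]
    mu = mean(amounts)
    sigma = stdev(amounts) if len(amounts) > 1 else 0.0
    flagged = []
    for c in claims:
        z = (c["amount"] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append(c["id"])
    return flagged

# Five ordinary bicycle claims and one suspicious outlier (hypothetical)
claims = [{"id": f"C{i}", "amount": a}
          for i, a in enumerate([500, 520, 480, 510, 495, 50000], start=1)]
print(flag_outlier_claims(claims))  # only the outlier claim is flagged
```

The point of the sketch is the trade-off the paragraph argues for: honest claims pass through untouched, while only the statistical outliers are routed to a human investigator.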

Honestly – how does one appeal to people’s conscience in digital touchpoints? Let’s look at Behavioral Economics

Especially in regard to ‘soft fraud’, the field of Behavioral Economics can contribute preventive tactics that motivate customers to stay honest throughout the claim filing process. The awareness that a counter-fraud database exists, where for example data about a particular claim is stored for several years, will most likely affect the rationale behind fraud. The prospect of a harsh judgment or penalty might have a similar effect. Knowledge about the counter-fraud systems, rules and limits that insurance companies use also has a certain effect, and we do in fact see fraudsters move from insurance company to insurance company as the companies’ fraud detection systems improve. Finally, there is the ethical awareness related to the claim filing process. Here the data ethical maxim is to create an incentive for the customer to reflect ethically when he or she files a claim.

In a previous study, the number of kilometers reported by people taking out car insurance was compared. The higher the number of kilometers, the higher the premium. Half of the people reported the number of kilometers first and then signed a sworn declaration that the stated information was correct. The other half signed the sworn declaration before filling in the number of kilometers. 13,488 insurance policies were filed, and the results showed that people who signed the declaration at the beginning of the procedure generally reported about 10% more kilometers than those who signed at the end [1].
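The size of such an effect is simple arithmetic on the two groups’ means; the sketch below uses invented kilometer reports, not the study’s raw data:

```python
from statistics import mean

def reporting_gap(group_a_km, group_b_km):
    """Relative difference in mean reported kilometers between two
    groups of policyholders (numbers below are invented, not the
    study's raw data)."""
    return mean(group_a_km) / mean(group_b_km) - 1

# One hypothetical group reports roughly 10% more kilometers than the
# other, mirroring the size of the effect attributed to the placement
# of the sworn declaration.
group_a = [24000, 24200, 24400]  # mean 24,200 km
group_b = [21800, 22000, 22200]  # mean 22,000 km
print(f"{reporting_gap(group_a, group_b):.0%}")
```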

Literature on the subject of Behavioral Economics suggests other recommendations which appeal to ethical reflection when a claim is being filed:

  • Rewarding good behavior
  • Humanizing the report form, for example via a personal note and signature from the insurance company’s Director of Claims somewhere in the digital claim reporting process
  • Transparency about the direct and indirect costs of fraud at the firm or industry level, which can awaken the fraudster’s ethical consciousness regarding the consequences of his or her fraudulent activities
  • Information and education about the claim reporting process and, for example, the difference between subjective and objective justice
  • Stating the guidelines for reporting
  • Stating Terms & Conditions as well as a codex for rejecting claims
  • Creating awareness about the individual’s affiliation with the insurance community
  • A stronger business or industry image will presumably also diminish the incentive and make it harder for some to justify fraud. This is where we enter an area I like to call “data branding”, where customers are not hesitant to share personal data with the company they interact with, because they have a trusting relationship based on value transactions, transparency and data ethics.

Ready for organizational change? Claims processing and counter-fraud will blend together

Over time, the filing and processing of claims will become much more automated, with Straight Through Processing and immediate pay-as-you-go. Fraud investigation, on the other hand, will blend with claims processing through greater use of already available digital data and well Thought Through Processes that promote honest claim reporting:

  1. Transversal integration of more data points with use of technology: Real-time data analysis and automated pattern recognition will be running in the background throughout the claims handling process.
  2. Preventative measures: Behavioral economics strategies and tactics are integrated into the claims handling process through thought-through service design, questionnaire techniques etc.

The questions that the claims handler (human or machine) asks during the filing of a claim will be reduced to a total of two or three, coupled with a long line of already available data as well as automatic image recognition. Further questions and data collection will only happen in those cases where there is a really good reason to ask clarifying questions.

How will the claims-handling departments then look, in organizational terms? There will be a need for human competences in the design of the filing process, questionnaire formulation techniques etc. on the prevention side, and there will be a need for hands-on work on the investigative side, where suspicious cases are examined. Much of the outlier detection will be handled by AI and transversal data integration. In short, human capabilities will be needed in the design and setup of claims handling as well as in the investigation of potential fraudsters (many more of whom will be detected), i.e. investigation of more true positives and fewer false positives.

[1] Shu L, Gino F, Bazerman M et al (2011)

Developing a Data Ethics Policy for the Insurance Industry in the New Data Economy

The Danish insurance industry faces huge opportunities and challenges in the new data economy. This calls for a thorough data ethics policy in which difficult dilemmas are considered and both lines and principles are drawn. In collaboration with the Danish Insurance Association, we at Nextwork have developed a comprehensive framework for establishing a data ethics position and provided recommendations for a data ethics policy for the industry, drawing on expert knowledge and case examples from the insurance and InsurTech industries. Here is a fairly short explanation of the need for a data ethics policy and a summary of the key data ethical recommendations. Click here to get the ‘short’ version in Danish.

Forces at play in the new data economy

The insurance industry is, and has always been, a data-based industry, and it is currently undergoing extensive digitalization. Across the value chain, data – and more critically personal data – is at the core of everything: from targeting potential customers, underwriting and risk calculation, managing retirement savings through ongoing advisory services and risk management, to claims handling and fraud detection. But what happens when the amount of available personal data grows explosively within a short period of time? What opportunities and challenges does this pose for the insurance industry?

This is increasingly a reality not only for the insurance industry but for the entire world, due to the explosion of personal data in particular over the past couple of years. This is partly due to the rise of IoT (Internet of Things), where different devices are connected to the internet in order to collect and integrate data from the real world into the digital world. Just to put it into perspective: an astounding 90% of the data available to us today has been created during the past two years (World Economic Forum, 2017: The Value of Data).

The dilemmas that arise with a growing amount of data

  • Opportunities: The growing amount of data on houses, water pipe conditions, driving behavior, weather conditions and a vast array of personal data can to a great extent be used to ’perfect’ the market in a number of ways. It can provide insurers with additional information, which in turn makes pricing more exact, as the figure below illustrates. This will ultimately benefit a lot of customers who are currently paying more than necessary for their insurance product. Insurance fraud investigators can also combine multiple data points with AI to detect fraudulent activity more precisely and avoid bothering honest insureds with unnecessary suspicion. And through more or less direct access to critical health data – abiding by the rules of privacy by design – the life insurance company can make sure that people with a critical illness get their entitled claims payment right away. This means that a critically ill person can avoid facing a wall of bureaucracy in an already tough situation where resources may be few.
  • Challenges: While the possibilities are many, so are the potential downsides. Take, for instance, the question of the uninsurable: those whom few insurers will be able to insure because of their (as the data shows) particularly high-risk profiles. How far can discrimination go in the name of guaranteeing fairness in pricing? We already discriminate today based on age, in so far as young people pay more for car insurance than older people. That is a sort of discrimination based on few data points and broad categorizations of risk groups. So which kind of price discrimination is the most ethically justifiable? And how much ‘surveillance’ should insurers impose on customers in the name of giving them the best possible service and providing relevant value when certain life events occur? What is ‘cool’ and what is ‘creepy’? What is ethically justifiable and what is not?


First consideration – what is your ethical starting point?

Throughout the history of ideas, two competing ethical viewpoints have dominated:

  • Utilitarianism, which is an ethical approach based on the consequences of actions
  • Deontological ethics, which is an ethical approach that always puts the individual at the centre and poses certain unbreakable rules regarding the rights of the individual

Second consideration – more or less data?

  • It is possible to obtain ethically desirable ends through data minimization, which seems to be the default approach in the current data ethics debate, e.g. “What limits shall we set to constrain ourselves?”. A clear example is when data security is achieved through minimization – in other words ‘security by obscurity’ – as is often seen in today’s society, which is immensely dominated by the “better safe than sorry” philosophy. But this mindset does not come without consequences: you end up completely losing the value that personal data possesses.
  • At the other end of the spectrum, you can obtain ethically desirable ends through data maximization, utilizing personal data (sometimes, if necessary, even without consent). This can for instance be data concerning one’s health condition, exercise level, driving behavior, the condition of the water pipes in the cellar etc. This approach has historically been dominated by the utilitarian “for the greater good” philosophy, since an immense amount of data could help researchers develop more efficient and personalized medicine, and insurance companies could help prevent accidents or perhaps intervene before a person breaks down from stress. It does, however, come with the risk of data being misused. Recent technological advancements within encryption and privacy by design nevertheless allow for combining “for the greater good” with full disclosure regarding privacy and individual consent.

A framework for developing a data ethics position

On the basis of utilitarian vs. deontological ethics and data minimization vs. maximization, we at Nextwork have created a framework for developing a data ethics position. Three basic data ethical positions are presented:


The Critical Position: Every individual must own all of his or her own data, and companies have to be very careful with the way they collect and use it. Individuals and companies who subscribe to this position would argue for limited access to data, and data should always be deleted once the purpose for which it was collected has been fulfilled, for the sake of the individual’s right to privacy.

The Progressive Position: Individuals can, through individual control over the data they produce and own, maximize the value of that same data. In this position, the individual is encouraged to take ownership of personal data and can attain value by putting the data to use. Companies will in turn encourage this process, not just to be GDPR compliant, but in the customer’s best interest as well as the company’s own.

The Aggressive Position: A great society must use everyone’s data for the sake of the common good. Stakeholders within this position would want personal data to subsidize innovation and create value for the entire society. It is less important that individuals have ownership of their data as long as it benefits a greater good. The common good would often be managed by institutions such as a research unit, but it can also be managed by a public regulator or a private company.

The general data ethical themes that the insurance industry faces

Depending on your data ethics position, the answers to the major data ethical themes in insurance vary.

Nextwork has recommended the following to the industry:

Data Security – We advised the industry to always have data security at the center of its data ethics and to keep Privacy by Design as the default setting. A secure data infrastructure is a pivotal requirement when data is shared between individuals, companies and the public sector. The industry’s approach to data security should therefore center on position 2 – the progressive position.

Data (self-)control and enablement – We advised the industry to fight for the individual’s right to retain ownership of their data. The individual being in control of his or her own data is a crucial element in securing the basic trust that is fundamental when data is shared between different parties. The industry should make sure that the customer’s data is as easily accessible to the customer as possible, and that individuals always have the opportunity to improve their chances of obtaining more precise coverage. Getting a more precise risk calculation by using more data is ethically justified by the fairness and solidarity principles, but it requires self-control of the data and, consequently, a fair amount of digital literacy among the customers. The industry should therefore operate within position 2 – the progressive position – when working with data control.

Personalization – We advised the industry to continue to use data to differentiate between groups of risk profiles and to identify high-risk groups. This is necessary to maintain an equal playing field with regard to foreign competitors and to create the best incentive for customers to improve their risk profiles. Using even more data points in risk calculation could also help break certain segmentations seen today that are in a sense ‘unfair’. Young drivers, for example, would get the opportunity to affect the premium of their car insurance, which today is highly affected by their age. In order for certain high-risk groups not to become entirely uninsurable, there should be a limit to personalization-based classification, especially when it comes to those data points that cannot be improved by the individual. The data ethical approach to personalization should therefore be grounded in position 2. There will, however, be certain contexts in the future where the industry could lean toward position 3 (the aggressive position), for instance when it comes to geodata on houses, because knowledge about houses and flooding conditions – and of course the potential solutions – is highly beneficial for the individual, society and the insurance company alike.

Behavioral Change and Incentivization – We advised the industry to give customers the ability to use their data for preventive efforts and incentive-based behavioral regulation, as long as the customer is able to make an informed decision and has a genuine alternative to sharing data. The industry should also have the opportunity to access and use vast amounts of data when doing so benefits the common wealth of society at large, such as cutting down medical expenses. It is also ethically responsible for the industry to store and use the data necessary to improve fraud detection. We therefore advise the industry to use a mixture of positions 2 and 3 when dealing with behavioral change and incentivization. Read this article for more information and recommendations on countering insurance fraud.

Transparency – We advised the industry to provide full transparency when asking for access to the individual’s data. This is a key principle! The industry has a moral obligation – and it is in the industry’s best interest – to make it easy for customers to understand what they are consenting to when handing over data. This also requires enhanced transparency about the risk calculation process, where it is pivotal to inform the customer which data points are used to estimate their risk profile. We advise the industry to use principles from the schools of legal by design and ethics by design. The industry should therefore have position 2 at the centre of its transparency policy. Transparency combined with data security and data (self-)control is far more interesting and data ethically sound than merely focusing on securing privacy.

Who is the loudest of them all?

Nextwork’s Head of Digital & Public Affairs Lasse Perrild explains to the Danish newspaper Dagbladet Information, why the Danish Immigration and Integration Minister Inger Støjberg is dominating the political debate on Facebook.

A digital study that Nextwork and Analyse & Tal have done for Dagbladet Information, surveying more than 16 million data points, has shown that the Danish Immigration and Integration Minister Inger Støjberg is far superior to her colleagues when it comes to Facebook reach and interaction. On average, her posts get 6,376 interactions, four times as many as the politician with the second-most interactions, Pia Olsen Dyhr from SF, with 1,580 interactions per post.

Lasse Perrild attributes Støjberg’s success on Facebook to her sharp eye for divisive content and her ability to provoke people and, most importantly, get them to react. “Almost all of her posts are about immigrants, which is a subject that we know divides the Danish population. She therefore gets a lot of negative comments, but they actually help her spread her message, because Facebook’s algorithms then assess it as a meaningful debate that should be spread to even more people,” says Lasse Perrild in the article.

And Inger Støjberg’s massive presence on Facebook is of great importance to her party, Venstre. The digital study executed by Nextwork and Analyse & Tal showed that Inger Støjberg accounted for 25 percent of the Facebook interaction with Venstre and its politicians. Lasse Perrild points out that the view of Venstre the population meets on Facebook is consequently massively dominated by Inger Støjberg’s opinions and posts. The study also shows that Støjberg has a unique ability to appeal to those who normally vote for the Danish right-wing party DF, and that she is by far Venstre’s greatest channel for reaching those voters.

Nextwork recommendations for policy on disinformation, as presented to the European Parliament

Thomas Albrechtsen will be a contributor and speaker at a conference called “Countering Disinformation: Democratic accountability and algorithmic transparency” hosted by the International Republican Institute.

Apart from giving a presentation, Thomas will be handing over a list of recommendations for potential legislation within this policy area. By now, Nextwork has built a pool of knowledge and understanding within the field of disinformation, and for this specific cause we have gathered our key points on how best to uncover and handle future influence campaigns.

There are three main points to our list of recommendations:

  1. Social media platforms have to work with authorities and civil society actors to unveil disinformation by sharing more data
  2. Social media platforms have to be transparent about how they practice censorship, how their algorithmic recommendation of suggested content works, and what ads are run on their platforms
  3. Policy makers should be given fundamental training in the data-flows of social media, and the potential risks and pitfalls during an election

Our recommendations can be accessed in their full length here: Countering Disinformation

Web 3.0 will be a gamechanger for businesses

Blog post originally published on Version2 on August 14th, 2018

Cited on Techtobia: “Straight From the Tech Experts: What Will the Defining Feature of Web 3.0 Be?” on September 24th, 2018

Web 3.0 will be a gamechanger for business leaders and brand managers in the years to come

Many of us probably dropped our jaws a bit when Facebook recently lost 13 billion USD as a consequence of the scandal surrounding the improper management of personal data. We have since then furthermore seen large telecom companies declare, hand on heart, that they will not sell customers’ location data anymore. These two strikes of lightning have made two things clear: 1) The stock market understands the value of storing and making use of personal data. 2) Firms understand in all seriousness the risk associated with not handling personal data properly. Data security – and information security – will therefore be a key factor for companies going forward. Combined with other elements, this will give firms in the digital world a position which makes it possible to continue – and to an even larger degree – to utilize and make use of user data.

Personal data has most certainly become ‘the new asset class’. And with GDPR – the General Data Protection Regulation – data ownership has irreversibly been given to the end user. Or the data source so to speak. As a consequence, it will be hard for the huge databases from the Web 2.0 paradigm to continue their as-is-practice, i.e. where the big tech companies to a large extent could privatize and monopolize the economic value of personal data. With Web 3.0, personal information storage will become much more decentralized!

“Business leaders should be wary of Web 3.0, because this ‘user-utopia’ is here and it is happening!”

We can already see examples of platform companies providing users with secure, encrypted personal data libraries from which the user can share data, on a case-by-case basis, with the companies he or she trusts and wants to interact with. This also has the implication that in order for companies to gain permission to use the end user’s data, they will need to focus much more on their online data brand, e.g. credibility, likeability, brand proposition, raison d’être (the big why), USPs, communication, data policies and data ethics, clarity of speech in the permission statements, benefits, user-friendliness, convenience, transparency etc. Not least, they should have a data-centric approach to IT security, including what is called Data-Centric Audit and Protection (DCAP), which is specifically about data security rather than firewalls, networks, software systems and hardware – not to forget data culture. With Web 3.0, we will see a de-monopolized Internet, which will be further accelerated by the Internet of (every)Things, and the successful websites of Web 3.0 will be permeated by a well thought out data brand. Business leaders should be wary of Web 3.0, because this ‘user-utopia’ is here and it is happening!
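The case-by-case permission model described above can be sketched as a tiny data-vault interface: the user grants a named company access to specific data fields for a stated purpose, and every read is checked against those grants. All names here (PersonalDataVault, the fields, the company) are hypothetical illustrations, not a reference to any actual Web 3.0 platform:

```python
from dataclasses import dataclass, field

@dataclass
class Grant:
    company: str   # who may read
    fields: set    # which data fields they may read
    purpose: str   # the specific, stated purpose for the access

@dataclass
class PersonalDataVault:
    """User-held data store: companies read only via explicit grants."""
    data: dict
    grants: list = field(default_factory=list)

    def grant(self, company: str, fields: set, purpose: str) -> None:
        self.grants.append(Grant(company, fields, purpose))

    def revoke(self, company: str) -> None:
        # The user can withdraw consent at any time
        self.grants = [g for g in self.grants if g.company != company]

    def read(self, company: str, field_name: str):
        for g in self.grants:
            if g.company == company and field_name in g.fields:
                return self.data[field_name]
        raise PermissionError(f"{company} has no grant for '{field_name}'")

vault = PersonalDataVault(data={"email": "user@example.com", "water_usage": 120})
vault.grant("PumpCo", {"water_usage"}, purpose="optimize pump service")
vault.read("PumpCo", "water_usage")   # allowed under the grant
# vault.read("PumpCo", "email")       # would raise PermissionError
```

The design choice that matters is the default: access is denied unless an explicit, purpose-bound grant exists, which inverts the Web 2.0 pattern where data is collected first and consent negotiated later.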

However, many can claim to know prophecies of the future and to have far-seeing vision. So let’s look rationally at what we know – let’s wander down the PEST road (political, economic, social and technological).

“The power dynamic between firms and users has experienced a shock treatment and we have barely begun to grasp what has actually happened”

Political focus on privacy and user-control
We have seen great political and regulatory enthusiasm about increasing privacy. Nothing indicates that this will not continue… We have just now seen and experienced GDPR, which gives the citizen comprehensive ownership rights over data. GDPR also makes it possible for the citizen to take action against firms, on the basis of Article 20, and have his or her data transferred between firms. And GDPR is not just a European phenomenon – it’s setting the global standard right now, especially for businesses with cross-border operations. A new law was passed in California by the end of June 2018 which is now the most extensive regulation of data management in the US. This also gives the citizen the opportunity to sue firms if data breaches occur. The PSD2 regulation is another driver behind the de-centralization of data stores in the financial sector, and it is likely that a PSD2 for insurance will also be implemented one day soon. The power dynamic between firms and users has experienced a shock treatment and we have barely begun to grasp what has actually happened.

Economic growth in data-generating devices and new data-business models
The stock market has, as mentioned, definitely started to react to the importance of personal data and privacy mishaps.

“…the cost of managing data as part of the business model has risen and so have the associated liabilities”

And with GDPR, the current ratios for valuing network-based Web 2.0 companies will likely change. Ratios such as Average Revenue Per User (ARPU) and Monthly Active Users (MAU) are no longer sufficient for company valuation after GDPR. Simply put, the cost of managing data as part of the business model has risen and so have the associated liabilities. With the increased risk associated with Web 2.0 business models, the rationale for embarking on more Web 3.0-like business models will be stronger for many companies and institutions (for more on valuing the effects of privacy regulation on the cost base, please read this great article by GDPR and privacy analyst Chiara Rustici). IoT is also estimated to generate far more than 14.4 trillion USD globally in 2020, which suggests that the amount of data points will increase significantly with IoT. As a consequence, we can count on seeing many more firms offering secure and encrypted storage of those personal data already in 2019. These will change the data infrastructure as we currently know it. Right now, it is primarily capitalist activists who are driving Web 3.0 forward, but the majority of the market will catch on eventually as we begin to be able to measure the immense liabilities associated with the current data business models.

Social consciousness of the value of personal data and changing user demands
Generally speaking, we are seeing an increased focus on personal data both in news columns and on screens, which contributes to increasing the awareness of personal data among citizens. The GDPR fines that the whole world is waiting for with anticipation will be an unfortunate catalyst for further user awareness regarding one’s own personal data. The MyData movement is on the rise and great things are happening in e.g. Helsinki and France. ‘Personal Data Week’ as well as other similar international campaigns have also been held in the US in 2018. A quick search on the internet shows that unions and consortiums, such as The Personal Data Trade Association and the Personal Data Ecosystem Consortium, are popping up everywhere. These initiatives create attention, put focus on and lobby for citizens to be more enabled and empowered to claim their rights over their data. But more importantly, the MyData movement represents a powerful philosophy about data ownership and retaliation mechanisms empowering the data owner – the individual. The new systems and infrastructures that are coming will also create new user expectations of data security and of the opportunity for the user to claim his or her data. The data marketplace is taking shape and data brokers are emerging. We currently see companies responding to this by re-considering the ‘My Site’ universe on their web pages, making personal data much more transparent and actionable for the end user. And from here, the right to data portability will eventually be enabled.

“It is very likely that we will see users demanding high ethics in terms of data-usage, user-friendliness, and security regarding their information”

Just as Amazon and Netflix have changed our patience when it comes to movies and TV shows – don’t we all remember the 60-second-long menu and copyright warning sequences, which were customary in the DVD universe. It is very likely that we will see users demanding high ethics in terms of data usage, user-friendliness, and security regarding their information.

Technologies promoting privacy and control
The growth within Privacy-Enhancing Technologies (PET) is in full swing. Encryption technology is by now as advanced as Usain Bolt’s leg muscles, and the technology is being used in several places. The development within cryptography and differential privacy has made important progress. This technology is about preventing loss of privacy while the same data is being used in aggregate, where it creates value, improves AI, and enables innovation and new products.
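To make the differential privacy idea concrete, the classic mechanism is to add calibrated Laplace noise to an aggregate statistic, so that no single individual’s presence in the dataset can be inferred from the published answer. A minimal sketch, assuming a simple counting query (the epsilon value and the usage figures are illustrative, not from any real system):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(values, threshold, epsilon=0.5):
    """Noisy count of values above threshold.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so the noise scale
    is sensitivity / epsilon. Smaller epsilon = more privacy,
    more noise.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative water-usage readings (liters per day, made up)
usage = [120, 95, 300, 210, 80, 150, 400, 175]
noisy_answer = private_count(usage, threshold=200)
```

Each individual answer is perturbed, but because the noise has zero mean, repeated or large-scale aggregate analyses remain statistically useful – which is exactly the “use the data where it creates value without losing privacy” trade-off described above.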
There you have it…

“Web 3.0 is about baking a larger data-pie and maximizing the benefits by inviting the user to have a seat at the table”

But Web 3.0 should not be seen as a threat to the giant tech firms’ existence as much as a rescue operation. Because while the practices of the tech giants in Web 2.0 might be useful for some, they are not sustainable in the long run. They simply cannot press ahead with digital decency and dignity parked at home in the startup garage, because of the big techlash that lurks on the horizon. And that will be hard on the big tech firms, but hardest of all on the rest of us, who will sit in the dark if the doors close and the lights stay out until the generators have cooled off.

In any case, all business leaders should look out for this opportunity to develop new competitive advantages in the data economy. Web 3.0 is about baking a larger data pie and maximizing the benefits for all by inviting the user – the data subject – to have a seat at the table.

Google re-considering moving into China



Thomas Albrechtsen invited to join the SIRI Commission, as it investigates how to strengthen the digital immune defense of the democracy

As the monopoly of truth is abolished, public belief and opinion become volatile. Multiple human as well as AI actors are competing to obtain influence. How do we best equip politicians, people and press to handle and regulate the flow and quality of information in a digital age?

Nextwork CEO, Thomas Albrechtsen, has been invited to join an expert group under the Danish SIRI Commission by virtue of his work with disinformation and democracy in a digital age. In the months ahead, Thomas will be working with the commission to develop a range of recommendations on how to handle the digital disruption of information and news.

About the SIRI Commission

The SIRI Commission is a think tank founded by Danish politician Ida Auken and the Danish Society of Engineers. The commission works to map out the changes, challenges, and potentials of artificial intelligence and digital disruption in a broad sense. The commission addresses multiple secondary themes one after the other, and sets up a working group of experts for each theme.

Thus far, the commission has dealt with the themes of transportation, health and fintech. The next theme of the commission is called “AI, media and democracy”. The permanent members of the commission and the appointed expert group will present their recommendations by February 2019.


More about the work group and the theme (Danish):

More about the work of the SIRI Commission (Danish):

Read a discussion piece by Thomas Albrechtsen, published in the Danish newspaper Politiken, on the issue of conveying truth in a digital age (Danish):

Could Russian interference impact the next Danish election?

At Nextwork we advise a diverse group of clients on how to navigate and utilize a rapidly changing and increasingly digital media universe. Denmark as a democratic state will be facing a major digital challenge, as the next general election approaches: How can we make sure, that the election is determined by the Danish voters, without interference by foreign agents?

Over the past few years, we have seen multiple examples of foreign states trying to influence national elections using ads, fake news websites, internet bots, trolls etc.

At this year’s People’s Political Festival (Folkemøde) at Bornholm, Nextwork and Analyse & Tal have invited representatives from Google, Facebook, The European Political Strategy Center, and the Danish media industry to debate the magnitude of the issue, and what protective measures are being taken in order to sustain our digital democracy. We will be investigating previous cases, as well as the risks and responsibilities regarding foreign political interference with the next Danish election.

Host: Thomas Albrechtsen, CEO, Nextwork

The event will take place on June 15th at 14:30, at the stage named “Det Akademiske Kvarter”, zone F14, Allinge, Bornholm.

Read more, and sign up for the event through Facebook (Danish):

Do data rights come with data obligations?

Nextwork CEO Thomas Albrechtsen and partner of Analyse & Tal Tobias Bornakke challenge the general perception of public digital data in a new article published by Danish newspaper Dagbladet Information.

The two authors argue that we ought to consider a more solidarity-based approach to the concept of sharing data. The social turn in the perception of public data is simple: all major agents should contribute to the pool of public data, in order to minimize monopolization and support research, transparency and democratic development.

When it comes to data, everyone seems to worry exclusively about their individual right not to be measured, tracked, filed, and observed. While this concern is honest and understandable, our fear of being monitored has so far led us to allow for further monopolization of the data collected by tech giants such as Facebook and Google. This, along with our resistance towards collection and pooling of public data by other organizations, will greatly weaken research within medicine, social science, infrastructure, communication, and language.

Can we justify our aversion to public data, when the scientific development and understanding of our social interactions and communities are suffering? Does it make us feel safer that the global pool of digital data is exclusively for tech giants to utilize? According to Albrechtsen and Bornakke, it shouldn’t.

While everyone agrees that data should be managed safely and with proficiency, and that GDPR is a step towards increased data security, a lot of professionals lament the further monopolization which the GDPR, and recent violations of data regulations, have brought about. Scientists and researchers are suffering, as tech giants manifest their absolutism by locking away data on their servers.

Capitalist and totalitarian states are already benefiting from the possibilities of extensive data collection. More effort should be put into developing socially sustainable and democratic data systems in Europe. The most well-functioning democratic states should lead the way and explore the possibilities of safe collection and use of public data. Albrechtsen and Bornakke also believe that the only way to break the data monopoly is to have tech giants make their data publicly available for responsible agents to scrutinize.

Read the article here (Danish):


Lasse Perrild announced as one of top 100 business talents of 2018

At Nextwork we have a knack for spotting talent. Last fall, we were pleased to appoint Lasse Perrild as the first Head of Digital at Nextwork. It has long been obvious to us that Lasse holds a unique talent for understanding the political agendas of the digital age and providing useful, forward-thinking consultancy to a diverse group of clients.

Now, the Danish corporate world has officially recognized this talent by nominating Lasse as one of the top 100 business talents of 2018. This year, involvement with the digital agenda has been a deciding criterion for the nomination of talents. At Nextwork we don’t hesitate to say that by engaging Lasse, we have gotten hold of one of the biggest digital talents Denmark has to offer.

Lasse Perrild is former Senior Press Officer to the Mayor of Employment and Integration in Copenhagen. At age 26 he was appointed Special Advisor to the Minister of Employment, Ida Auken, and later he also served as Special Advisor to the Minister of Taxation, Holger K. Nielsen. Moreover, he has worked as Press Coordinator and Speechwriter to the CEO of the Confederation of Danish Industry, Karsten Dybvad.


Read the portrait following the nomination here:

How to combine big and ‘thick’ data – new paper in Big Data & Society

Big data enables us to crunch millions of data points, thereby adding behavioural scale when studying social phenomena such as customer behaviour. ‘Thick’ ethnographic data, on the other hand, enables us to calibrate and contextualize the big data findings, adding the why to unexplainable patterns within the big datasets. Therefore, it can be very fruitful to combine big and thick data sources, and recent works have suggested an analytical complementarity. These works have, however, remained programmatic suggestions, leaving us with limited methodological input on how to achieve such complementary integration.

In a new paper, published in the prestigious journal Big Data & Society, our Head of Research, Brian Due, argues for a method for ‘blending’ big and thick analytical insights and presents four strategies that can be applied when relying upon big and thick data sources. The paper is co-authored with Analyse & Tal’s Tobias Bornakke and relies on insights from multiple joint Nextwork and Analyse & Tal projects.

The paper can be accessed and downloaded for free on the Sage Journal’s website.

Nextwork CEO: Authorities will have to fight for their status as conveyors of reliable information

Nextwork CEO Thomas Albrechtsen encourages authorities to contribute strategically to the available information on social, digital media platforms.
In a discussion piece published by the Danish national newspaper Politiken, Thomas Albrechtsen explains how the power to construe reality will be allotted to those who understand and utilize the accessible information technology. The monopoly of truth is abolished as innumerable sources, platforms, and online communities arise. So how do we prevent secluded online communities of truth and meaning?
All indications are that social media platforms such as Facebook and Youtube cannot be trusted to sort out falsehoods, demagoguery, hate speech, and unreliable sources. We have the necessary technology to identify the key arenas, actors, and issues of public debate. It’s about time authorities take an interest in the creation of public opinion on social media and take up their role as active contributors to the competing narratives regarding health, environment and minorities.

Political Top Advisor becomes Head of Digital at Nextwork

Starting October 1st, Lasse Perrild will be the first Head of Digital at Nextwork. Lasse will be responsible for the increasingly important task of helping corporations and organizations understand and act in a media landscape that is changing at an accelerating speed.

CEO, Thomas Albrechtsen:

“Lasse Perrild is one of the biggest talents in Danish political consultancy. Only a few people of his age have his level of experience from the Parliament, the Government, the City Hall and the Confederation of Danish Industry. Our clients are asking for advice on how to adapt to the changes that are happening around them – changes that are happening faster than ever with social and digital media. Organizations, now more than ever, are experiencing a need to break down reactions and interactions in order to understand political agendas. For this, Lasse is a perfect fit, and we are very proud to be able to serve our clients with his skills and capacity”.

Head of Digital, Lasse Perrild:

“Nextwork is an extremely interesting consultancy where services are developed in accordance with the latest academic research. This creates a unique platform for helping clients solve the problems they are facing in a changing media landscape. Understanding social media and navigating through the big data tsunami that strikes us these days will be one of the most important differentiating parameters for companies, organisations and parties in the near future. Here, Nextwork is an extremely strong partner for its clients and I’m really looking forward to being a part of their team”.

Lasse Perrild, age 32, is former Senior Press Officer to the Mayor of Employment and Integration in Copenhagen. At age 26 he was appointed Special Advisor to the Minister of Employment, Ida Auken, and later he also served as Special Advisor to the Minister of Taxation, Holger K. Nielsen. Moreover, he has worked as Press Coordinator and Speechwriter to the CEO of the Confederation of Danish Industry, Karsten Dybvad.

Thomas Albrechtsen, CEO, Nextwork

Lasse Perrild, Head of Digital, Nextwork

New website!

We are thrilled to present a redesigned website. Please take a look around!

Thomas Albrechtsen announced as one of Berlingske’s top 100 business talents

We are thrilled to announce that our CEO, Thomas Albrechtsen, was chosen by Berlingske Business as one of the rising stars of the Danish corporate world in 2017. The list includes talents who were “extensively nominated by business and other interested parties”, and is a testament to Thomas’ hard work and dedication to Nextwork.

Amongst other qualities, Thomas was nominated for his humanistic approach to analyzing Big Data, which emphasizes insights that allow us to understand human nature and behavior at scale. 

As noted by our chairman of the board, Jesper Højberg: “For me, with 30 years of experience as a consultant, it is impressive to watch Thomas advise customers at a level that is totally unique for such a young person. That he at the same time is able to develop novel research methods, publish academic articles, and can call himself an author is simply admirable.”

Brian Due receives academic award

Brian Due recently received the PhD award of the union Kommunikation og Sprog. The award is given every other year to “shed light on academic research with a business-relevant scope and also to reward a talented, young researcher.”

We are proud to have Brian on our team!

The award was given on November 14th 2016 at Carlsberg Academy. You can read more about the prize here (in Danish).

Brian Due presents Google Glass-findings at Columbia University

Our Head of Research and Assistant Professor at the University of Copenhagen, Ph.D. Brian Due, has spent the last year studying the future of wearables and their implications for social interaction.

His paper, ”Knowledge and Epistemic Incongruences in Social Interaction with Google Glass”, will be presented at the Fifth Meeting of the Language and Social Interaction Working Group (LANSI) at Columbia University on the 16th of October.

The paper deals with a participant’s use of Google Glass in social interaction with regard to object-orientation and identity; how the use of Google Glass is a private experience, which produces epistemic incongruence; and how Google Glass is a non-human participant that occupies slots in the sequential unfolding of turns.