Tag Archives: Research

Can Open Source Hardware Disrupt Manufacturing Industries?

It is with great pleasure that I announce the publication of the book in which Piet and I have a chapter:

Spaeth, Sebastian, and Piet Hausberg. “Can Open-Source Hardware Disrupt Manufacturing Industries? The Role of Platforms and Trust in the Rise of 3D Printing.” In The Decentralized and Networked Future of Value Creation: 3D Printing and Its Implications for Society, Industry, and Sustainable Development, edited by Jan Peter Ferdinand and Ulrich Petschow. Progress in IS. Berlin: Springer, 2016.
The preproof is on SSRN and you can get the PDF from there too. What is weird is that some freelance journalist seems to have discovered it and loosely retold it (without giving any credit). If he had at least changed the title and not copied it verbatim, I might not even have discovered his plagiarism.

Microsoft buys Nokia

Nokia's 98,000 employees [1] will probably not be delighted to learn about today's announcement that the core of Nokia, its mobile phone business (together with a license to its mapping services and patents), will be sold to Microsoft for a total of 5.44 billion €.

According to the press release, the transferred units represent ca. 50% of Nokia's revenue. The low price tag (€3.79bn in cash plus €1.65bn for patent licenses) is astonishing: in 2011, Nokia had a turnover of $50.18 billion (2012: $39.91bn). Compare that price/revenue ratio with those of other recent technology acquisitions (or IPOs) and you will see how desperate Nokia has become in a very short time.

The man behind the strategy of the last three years is Steve Elop, who was hired away from Microsoft in 2010 to lead the ailing firm back to the upper ranks of mobile phone manufacturers. He has been called a mole and a trojan horse before (indeed, already at the press conference at his inauguration), an accusation he has denied. His strategy of focusing solely on Windows Phone as the single operating system was heavily criticized by former Nokia executive Tomi Ahonen, among many others (see, e.g., The Elop Effect). Elop is now expected, as the Microsoft press release states, to transfer back to Microsoft.

The series of events is peculiar, to say the least. If it looks like a trojan horse, walks like one, and quacks like one, ….

As a result of its failed strategy, Nokia has not been doing well over the last few years. In the second quarter of 2013, Nokia shipped 7.1m Windows Phone smartphones, which is puny compared to the Android smartphone sales of Samsung (73.3m), LG (12.1m), Lenovo (11.4m), Huawei (10.2m), or ZTE (10.2m) in the same time frame. [2]

So what is now left of Nokia? The network unit Nokia Solutions and Networks, formerly known as Nokia Siemens Networks, which was founded in 2007 as a joint venture. However, core network hardware is coming under severe pressure as Chinese manufacturers such as Huawei gain ground.
There also seems to be a mapping division left (but not its apps), as well as a division for Advanced Technologies.

So far, consumers and technology firms have not warmed up to Microsoft on their mobile phones. It will be interesting to watch whether they will do so now that Nokia and Microsoft are under one roof, and to see what the rest of what is still Nokia will become in the coming years (my guess is that it will be snatched up by a Chinese company).

The reactions have been interesting, from techcrunch.com predicting Elop's ascendancy to the Microsoft throne, to sad eulogies. Nokia's share price jumped from 2.96 to 4.28 on the day of the announcement.

[1] src: Heise.de
[2] src: IDC Worldwide Mobile Phone Tracker, August 7, 2013

A case of plagiarism at ETH

Much has been written in recent months about the increasing number of article retractions. Often, peers, reviewers, and student mentors have been criticized for not doing their job. I believe that we do have a responsibility for bringing these cases to light. However, as long as organizations, afraid of negative publicity, fail to enforce strict penalties, the problems will remain.

My previous employer, ETH, has just withdrawn the degree of a student after finding a case of plagiarism. However, she can re-enroll, reclaim her course credits, and rewrite her thesis on a new topic. WOW, what a punishment. It reminds me of the story of the 1980s Budapest railway system, where the penalty for dodging the fare was … the price of a regular ticket. I have heard similar stories about my previous alma mater, the HSG, although I cannot confirm these from personal experience.

Detecting plagiarism as the mentor of a thesis or as the reviewer of a paper is hard; it is really hard work. Why would I invest my time and go through the hassle if I know that fraud will not have serious consequences?

I have been waiting for more than a year for the report of a research misconduct committee, while the retraction count of a German professor skyrockets to at least 12 retracted articles, with no report and no consequences in sight. The same university took only a month to give an already retired professor a mild slap on the wrist for publishing a student's work as an article without any mention of said student. Why would I expose myself by accusing peers when nothing is going to happen?

I do not aim to punish people for minor errors ("only those who don't work make no mistakes"). And I certainly do not appreciate the digging out of 30-year-old dissertations just for the sake of detecting plagiarism. But organizations should face uncomfortable realities and implement harsh penalties once serious fraudulent activities are detected and made public.

Economist seminars on public goods and generosity

Today, I went to a seminar by, from, and for Economists!

Karin Nyborg (University of Oslo) talked about "Cooperation is Relative: Framing and Income Effects with Public Goods", finding that "rich" people contribute to public goods independently of reward, while the poor were driven by self-interest. Interesting stuff.

This is relevant as we are also looking at contributions to a public good in Collaborative Open Innovation.

She described the standard linear public goods game: each player decides how much of their endowment to keep and how much to give to the group. The group's contributions are then multiplied by some factor and split equally among the group members.
Payoff: x_i = e_i - c_i + (m/N) * Sum_k(c_k), where e_i is player i's endowment, c_i the contribution, m the multiplier, and N the group size.
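
To make the payoff function concrete, here is a minimal Python sketch of that formula; the endowments, contributions, and multiplier below are made-up illustrative numbers, not the actual parameters of Nyborg's experiment.

```python
def public_good_payoff(endowments, contributions, m, i):
    """Payoff of player i in a linear public goods game:
    x_i = e_i - c_i + (m / N) * sum_k c_k."""
    N = len(endowments)
    group_return = m * sum(contributions) / N  # everyone's equal share of the multiplied pot
    return endowments[i] - contributions[i] + group_return

# Made-up example: two "rich" and two "poor" players, multiplier m = 1.6.
endowments = [100, 100, 60, 60]
contributions = [40, 30, 30, 10]

for i in range(len(endowments)):
    print(f"player {i}: payoff = {public_good_payoff(endowments, contributions, 1.6, i):.1f}")
```
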
The design is double blind: no one (not even the lab personnel) will know who decided what.

Highly endowed ("rich") students receive €10, the poorer ones €5. The project succeeds only if contributions reach 120 NOK (half of all endowments).

How the contribution decision is framed varies across three treatment groups: 1) Absolute, that is, "I decide to give 10 NOK to the group"; 2) Relative, "I give 30% of my endowment to the group"; or 3) Payoff, "I keep 70 NOK for myself".

As a result, treatments 1) and 2) turn out basically the same: 84% and 80% respectively "succeeded", i.e., reached the threshold, while only 67% of the payoff group did. Wow, it matters how you formulate the question.

Rich people gave the same in all three cases. Poor people contributed most in the "Absolute" case (40 NOK), less in the "Relative" case (30 NOK), and least in the payoff case (10 NOK or so). Overall, it was a nicely done experiment and quite cool. I could well imagine doing something similar.

Should the Handelsblatt ranking be boycotted?

A heated discussion is currently underway about whether the Handelsblatt ranking should be boycotted. 291 business administration scholars have signed an open letter by Margit Osterloh and Alfred Kieser criticizing the ranking. The main points of criticism are the methodology and the wrong incentives it sets for science and society.

Difficult questions. On the one hand, one cannot and should not force anyone to refrain from compiling rankings from public data, nor should anyone be allowed to withdraw from them. In that respect, I do not find a boycott of such a ranking particularly commendable. Should publication lists be kept secret?

On the other hand, the usefulness of such rankings has to be questioned. The Handelsblatt says it merely provides a tool, and how it is applied or abused is up to the user. We know that argument ("guns don't kill people, people do"), but it is undeniable that the mere existence of such tools influences individual behavior.

And yes, I know enough researchers whose research agenda has been shaped by what is publishable in journals, and by far not all of them were young researchers without tenure. I notice it in myself: I know interesting journals that publish on niches I care about, that have a particularly user-friendly open access philosophy, or that are otherwise appealing. But not in the "SSCI"? Sorry, not interested. A pity? Yes!

Journal impact factors come and go. Take Technovation (published by Elsevier, profit margin 37%): impact factor 2011: 3.3, impact factor 2007: 1.0. Did Technovation articles become three times better in quality since 2007 (like a good wine 🙂 )? According to the ranking, yes, because it only considers a single impact factor.
Or have the editors (and authors) simply become better at managing their IF? Without self-cites, the journal impact factor would be only 1.7 instead of 3.3, since 48% of the 424 citations are self-cites. (Citations of articles in one's own journal, as demanded by many editors, are another topic that goes hand in hand with the importance of impact factors.) Wherever ranking systems and benchmarks exist, they get "gamed".
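
As a rough sanity check of those numbers, here is a sketch of the standard two-year impact factor calculation in Python; note that the article count is backed out from the reported IF of 3.3 and is therefore only an approximation, not an official figure.

```python
# Two-year journal impact factor: citations received in a year to articles
# published in the previous two years, divided by the number of such articles.
def impact_factor(citations, articles):
    return citations / articles

total_citations = 424        # citations counted for Technovation (figure quoted above)
self_cite_share = 0.48       # 48% of them are journal self-cites (figure quoted above)
articles = round(total_citations / 3.3)  # approx. article count backed out from IF = 3.3

print(f"with self-cites:    {impact_factor(total_citations, articles):.2f}")
print(f"without self-cites: {impact_factor(total_citations * (1 - self_cite_share), articles):.2f}")
# Prints roughly 3.31 and 1.72, matching the 3.3 vs. 1.7 comparison above.
```
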

In comparison, Organization Studies has hovered constantly around an IF of 2.0-2.3 for five years. Perhaps they should simply have asked their authors to cite more of their own work? (Only 19% of 277 citations are self-cites.)

Not to forget that the impact factor only counts citations to articles published in the last two years, while publication patterns differ enormously due to long review and publication processes. I have an article that was accepted by the journal in 2008 and published in 2010. How is such an article supposed to contribute to a two-year impact factor? Several times it took 9-12 months before I received feedback on a submission. The cited articles do not get any younger in the meantime.

Take Technovation again: its cited half-life is 6.0 years, i.e., more than half of the citations arrive only after 6 years (only 10.5% of citations came from the relevant last two years). Organization Studies has a citation half-life of 8 years, and only 4.5% of all cites come from the last two years…
On the other hand, I once managed to get a Physical Review Letters paper from submission to publication in three weeks. How are you supposed to compare that? Apples and oranges, anyone?

Every introductory statistics lecture that deals with power laws points out that the mean is a singularly unsuitable statistic for saying anything sensible about a power-law distribution. Example: the average US firm size is 19.0 employees. Great, but the mode (most frequent value) is 1 and the median is 3 (Axtell et al. 2001)! So what does the mean tell me about most individual firms? Citation counts of journal articles look very similar. So what can I read from the mean of a power-law-based journal impact factor about any individual article?
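
A quick simulation illustrates the point: for a heavy-tailed (Pareto-like) sample, the mean ends up far above the median and the mode, so it says little about any typical observation. The tail exponent below is made up for illustration and is not fitted to Axtell's firm data.

```python
import random
from collections import Counter

# Draw a heavy-tailed (Pareto-like) sample of "sizes": most values are tiny,
# a few are huge, so the sample mean is dragged far above the typical value.
random.seed(42)
alpha = 1.3  # tail exponent, chosen purely for illustration
sizes = [int(random.paretovariate(alpha)) for _ in range(100_000)]

sizes.sort()
mean = sum(sizes) / len(sizes)
median = sizes[len(sizes) // 2]
mode = Counter(sizes).most_common(1)[0][0]

print(f"mean = {mean:.1f}, median = {median}, mode = {mode}")
# Typically the mean comes out several times the median, and the mode is 1,
# the same qualitative picture as the firm-size numbers above.
```
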

I have read enough dreadful articles in the SMJ, and just as many gems in no-name, or rather no-impact, journals, to know that one cannot infer the quality of a researcher's 3-4 individual articles from a journal impact factor. And don't get me started on double-blind review. I have been able to peek past the blindfold often enough to see what happens there (but that is another topic).

To sum up: Will I boycott a ranking? No. Do I find it sensible and worth supporting? Not necessarily, and if so only very, very cautiously and to a limited extent. As soon as ranking systems exist, the participants change their behavior in order to game them! Especially when these tools are misused to inform hiring committees, chair evaluations, and faculty budgets. In that respect, many thanks to Ms. Osterloh and Mr. Kieser! I normally say, cynically, that such condemnations of rankings only ever come from emeritus dinosaurs (pardon me 🙂 ) who have nothing left to lose. That is why I am particularly pleased to read well-known names of young, active, and certainly publishing fellow researchers on the boycott list.

P.S. I cannot resist linking to my favorite article on impact factors (hint: a researcher dies and wants to get into heaven. Funny.) http://genomebiology.com/2008/9/7/107

[UPDATE 7 Sep 12] The Handelsblatt blog reports that most of the boycotters would not have appeared in the top lists. Of course it is easier to sign a boycott when, justified or not, you are not among those who profit most from it. But the very fact that 23 of 339 boycotters would have been named in the lists shows that it is not only unproductive researchers who want to withdraw from a ranking.

P.P.S. For all the criticism of the ranking, I still think that researchers should not have a right to withdraw from it. It is based on public data, after all, even if the data collection happens with the researchers' help.

Wer innovativ bleiben will, sollte kooperieren (Those who want to stay innovative should cooperate)

ETH's in-house journal, ETH Life, has published a short article (in German) about our Strategic Management Journal publication:

Wer innovativ bleiben will, sollte kooperieren (20 Jul 12)

Swiss firms are quite open to external knowledge, but there is still untapped potential, especially when it comes to cooperating formally with partners. For, in the right dose, looking beyond one's own backyard fosters a spirit of innovation and market success, as a study by ETH researchers shows….

Here is the full article: http://www.ethlife.ethz.ch/archive_articles/120720_offene_innovation_ch

Externe Ideen flexibel sammeln und nutzen (Collecting and using external ideas flexibly)

[Cover image: io management, 2012 issue. © io management 2012]

The current issue of io management is dedicated to the topic of "Open Innovation". It contains a short article in German by me and Georg von Krogh as an introduction to the topic. It is written for managers, introduces open innovation, and gives a brief overview. It was fun to write a short article for a change.

Invisible college? Coincidence? Or just ugly?

We have just submitted a revised version of an article to Research Policy. Looking at their latest issue, I noticed it is a special issue on a topic very closely related to ours. Look at the list of authors in that special issue:

Res. Pol. Volume 41, Issue 7, Pages 1121-1282 (September 2012)

Edited by Jan Fagerberg, Hans Landström and Ben R. Martin

  • Jan Fagerberg, Hans Landström, Ben R. Martin
  • Jan Fagerberg, Morten Fosaas, Koson Sapprasert
  • Hans Landström, Gouya Harirchi, Fredrik Åström
  • Ben R. Martin, Paul Nightingale, Alfredo Yegros-Yegros
  • Samyukta Bhupatiraju, Önder Nomaler, Giorgio Triulzi, Bart Verspagen
  • Ben R. Martin
  • Howard E. Aldrich
  • Tommy Clausen, Jan Fagerberg, Magnus Gulbrandsen
  • Ismael Rafols, Loet Leydesdorff, Alice O’Hare, Paul Nightingale, Andy Stirling

2 out of 9 papers without direct links to the editors? Excuse me? I know that it is customary (if not strictly clean) to give the guest editors an article slot (besides the editorial). However, looking at the list of authors in this special issue, it reeks of a small closed circle dividing up the special issue among themselves. I have no insight into the editorial processes of this particular special issue, so things might very well have been proper and nice. But if not, this is not the way to go about research. Things like this upset me.

Carrots and Rainbows

Around 4 years ago, we started working on a paper containing a literature review on individuals' motivations to contribute to open source software, concluding that the frameworks being used were too narrow to capture all aspects of what was happening. We concluded that open source software development needs to be seen as a social practice, and created a framework that allows a more holistic exploration of the interplay of motivations, practices, and institutions supporting (but also constraining and corrupting) OSS. The framework draws on the work on social practices by the moral philosopher Alasdair MacIntyre.

We submitted the article to MISQ, and 3 years and four(!) major revisions later, it has just been accepted. The editors and reviewers gave us a hard time, but the hard work that went into the revisions also improved the paper significantly.

May I proudly present:

Carrots and Rainbows: Motivation and Social Practice in Open Source Software Development

by Georg von Krogh, Stefan Haefliger, Sebastian Spaeth, and Martin W. Wallin

The preprint abstract is available online on this website, if you are interested in the full paper, let me know.

Abstract

Open source software (OSS) is a social and economic phenomenon that raises fundamental questions about the motivations of contributors to information systems development. Some developers are unpaid volunteers who seek to solve their own technical problems, while others create OSS as part of their employment contract. For the past 10 years, a substantial amount of academic work has theorized about and empirically examined developer motivations. We review this work and suggest considering motivation in terms of the values of the social practice in which developers participate.
Based on the social philosophy of Alasdair MacIntyre, we construct a theoretical framework that expands our assumptions about individual motivation to include the idea of a long-term, value-informed quest beyond short-term rewards. This “motivation–practice” framework depicts how the social practice and its supporting institutions mediate between individual motivation and outcome. The framework contains three theoretical conjectures that seek to explain how collectively elaborated standards of excellence prompt developers to produce high-quality software, change
institutions, and sustain OSS development. From the framework we derive six concrete propositions and suggest a new research agenda on motivation in OSS.

UPDATE: And it is finally officially out in this June issue.

David Teece lecture

Just listening to a lecture by David Teece at Goeteborgs Handelshoegskolan. He takes up many interesting issues, highlights current trends… and emphasizes the importance of Capabilities.

Some random notes:

Textbooks still have not managed to take the importance of intellectual assets into account; balance sheets are useless as they still mainly show physical assets.

In a similar vein, Porter's Five Forces analysis has become much less useful, as it focuses on a single industry, while much of the value and success hinges on the "salience of co-specialized complementary capabilities", that is, the combination of capabilities and assets to provide additional competitive advantage. The example here was the iPod/iTunes, which complement each other and are located in different industries.

Overall, the world might be flat, but capabilities are not equally distributed (they form mountains and hills); therefore, opportunities are not evenly spread out.

Previous success is no guarantee of future success. Andy Grove, Intel's CEO at the time: "Our current market share just gives us a seat at the table for future technologies" (I probably got the wording slightly wrong).

In the following Q&A session, some things came up:

Frameworks (such as Porter's Five Forces) are not a theory; they are lenses that allow us to look at things. In a way they are a "poor man's theory" 🙂

Overall, quite an interesting lecture. My favorite quote is one Winston Churchill gave at Harvard University in 1943: "The empires of the future are the empires of the mind."

The session was moderated by Maureen McKelvey.