Winter School
Open Data Citation for Social Science and Humanities
The main topic is the evolution of publication issues in the social sciences and humanities in the context of Open Access, with the underlying goal of promoting open science through the question of open data citation.
- Session 1: Introduction
- Session 2: Open Critical Edition. The Missing Link Between Digital Humanities and Open Science
- Session 3: Data Management Plan
- Session 4: Persistent Identification
- Session 5: Evaluation, Acknowledgement and Credit Circulation
- Session 6: Case Studies
- Session 7: Data Journals & Editorialization of Open Data
- Session 8: Economy of Open Access & Open Data Publication
- Session 9: Infrastructure & Platform
- Session 10: Social Impact
Introduction
Welcoming remarks
Lucie Doležalová
The introduction is chaired by Lucie Doležalová, the local organiser of this event. She works as Associate Professor of Medieval Latin at the Institute of Greek and Latin Studies of the Faculty of Arts, and at the Communication Module of the Faculty of Humanities, both at Charles University in Prague. She is also a researcher at the Centre for Medieval Studies of the Academy of Sciences of the Czech Republic.
Mirjam Friedová
Dean of Faculty of Arts, Charles University.
Marek Skovajsa
Vice-dean for research, Faculty of Humanities, Charles University.
Emiliano Degl’Innocenti
DARIAH in Italy & European landscape of digital humanities.
Research data in SSH: Joachim Schöpfel
Abstract
Research data in SSH
The presentation will investigate the relationship between data and text in different document types in the social sciences and humanities. It will introduce different categories and types of research data, link them with research fields and methods, and comment on differences between SSH and STM. It will also question the future of the distinction between documents and data in the environment of content mining. Other issues will be raised for further discussion: the sharing and reuse of data; the impact and evaluation of data; the link between documents, the data life cycle and the research process; identification, curation and preservation; and the function of data in the new context of open science. What is data? What is NOT data? What is functional and dysfunctional in the field of data management and data publishing? Perhaps, at the end, there will be more open questions than answers.
Speaker for this session
Joachim Schöpfel
Joachim Schöpfel is director of the French National Centre for the Reproduction of Theses (ANRT) and a scientist in information and communication sciences at the University of Lille (France). From 1999 to 2008, he was head of the library and document delivery department of INIST (CNRS). He holds a PhD in Psychology from the University of Hamburg (Germany) and has authored several publications and communications on scientific information, documentation and job development; see CiteULike and the French national LIS repository. He is a member of the editorial boards of, and a peer reviewer and evaluator for, various journals, collections and organizations. His research interests relate to open access, grey literature, ETDs, open data, scientific communication and library development. He is a member of the GERiiCO laboratory on information and communication sciences (Lille), of the Council for Documentary Information of the Free University of Brussels, and of the International Advisory Board of Project COUNTER.
Open Critical Edition. The Missing Link Between Digital Humanities and Open Science
Abstract
Open Critical Edition. The missing link between Digital Humanities and Open Science
Transparency, interoperability, free and open access are values commonly shared by Digital Humanities projects. But the mere publication and display of content on the web is not enough to make a project part of Open Science. As a new way of doing science that allows users to process the underlying data of a publication with tools, instead of just perusing it, Open Science requires Open Data and Open Process on top of Open Access. In this session, we will explore how digital editions, historically one of the most important parts of DH, can bridge the gap between Open Access and Open Science. Using examples of digital editions based on the Text Encoding Initiative (TEI), we will see how to integrate reflection on Open Data into an edition project as early as the conception phase, in close relationship with the theory about a text that a critical edition ultimately is – a theory based on data.
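To make the idea of an edition as processable data concrete, here is a minimal sketch (not from the talk) showing how a TEI critical apparatus entry can be read as structured data by a tool, not just displayed. The `<app>`/`<lem>`/`<rdg>` markup is the standard TEI apparatus module; the sample readings and witness sigla are invented for illustration.

```python
# Minimal sketch: a TEI apparatus entry treated as machine-processable
# data rather than display-only text. Sample content is invented.
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

sample = """<app xmlns="http://www.tei-c.org/ns/1.0">
  <lem wit="#A">dixit</lem>
  <rdg wit="#B #C">dicit</rdg>
</app>"""

def readings(app_xml):
    """Return (text, witnesses) pairs for every reading, lemma first."""
    app = ET.fromstring(app_xml)
    out = []
    for el in app:
        if el.tag in (TEI_NS + "lem", TEI_NS + "rdg"):
            out.append((el.text, el.get("wit", "").split()))
    return out

print(readings(sample))
# [('dixit', ['#A']), ('dicit', ['#B', '#C'])]
```

Once variant readings are available as data like this, they can be queried, compared across witnesses, or republished, which is what distinguishes an open edition from a web page.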
Speakers for this session
Marjorie Burghart
Marjorie Burghart is a research fellow at the CNRS (French National Center for Scientific Research), working at the CIHAM UMR 5648 research center in Lyon, and is specialised in medieval history and computer science. She is an elected member of the board of directors of the Text Encoding Initiative (TEI) consortium, the scientist in charge for the EHESS partner of the Erasmus SP+ Digital Edition of Medieval Manuscripts, and also the scientist in charge for the EHESS partner of the DIXIT (Digital Scholarly Editions Initial Training Network) Marie Curie European project. She has published several papers and software packages, and is involved in various projects of electronic edition of medieval documents in TEI format.
Emmanuelle Morlock
Emmanuelle Morlock is a digital humanities research officer at the French National Center for Scientific Research (CNRS) and currently works at HiSoMA, a research center dedicated to the archaeology and philology of the ancient worlds. Her main mission is to assist researchers in their application of information technologies and solutions for scholarly editions of ancient texts and inscriptions. Her activities include project ownership assistance and technical implementation of online publications (metadata modeling, definition of encoding strategies, TEI framework implementation, information architecture and digital curation of research data). She was educated in France, where she studied French literature and received a Master’s Degree in Information Science and Documentation from Sciences Po Paris.
Data Management Plan
Abstract
Research data management planning: a chance for Open Science. Methods and tutorials to create a Data Management Plan
With the growth of the Open Science movement in the past few years, researchers have been increasingly encouraged by their home institutions, their funders, and the public to share the data they produce. A new model of data sharing is emerging, and this issue is becoming more and more crucial for the scientific community and for national and international research policy. As shown by the OECD in 2007, public granting agencies expect publicly funded research projects to give access to the data produced in their work, in order to provide new resources for economic development. And with the extension of the Open Research Data Pilot in Horizon 2020, H2020 beneficiaries have to make their research data “findable, accessible, interoperable and reusable (FAIR)”, and are therefore asked to provide a Data Management Plan (or DMP) to this end.
More than a constraint, this new model of openness brings direct benefits for researchers. Sharing their data allows researchers to organise and retrieve it effectively, to ensure its security, to collaborate with fellow researchers within the same discipline or from other disciplines, to reduce costs by avoiding duplication of data collection, to ease the validation of results, and to increase the impact and visibility of their research outputs.
Speakers for this session
Marie Puren
Marie Puren also contributes to the IPERION H2020 project, especially by upgrading its Data Management Plan. After being a lecturer and responsible for continuing education projects at the Ecole nationale des chartes, Marie Puren was a visiting lecturer in Digital Humanities at Paris Sciences et Lettres (PSL) Research University. Her main publications belong to fields including the intellectual history of the 20th century, French studies and digital humanities. Marie Puren was awarded a Ph.D. in History at the Ecole nationale des chartes – Sorbonne University. She holds Master’s degrees in History and Political Science from the Institut d’Etudes Politiques de Paris, and in Digital Humanities from the Ecole nationale des chartes.
Charles Riondet
Charles Riondet, History PhD and archivist, is also involved in H2020 EHRI project as a metadata and standards specialist, with a focus on archival metadata (EAD, EAC-CPF).
Marie Puren and Charles Riondet, Ph.D., are junior researchers in Digital Humanities at the French Institute for Research in Computer Science and Automation (INRIA) in Paris and members of the Alpage laboratory (INRIA – Paris Diderot University). As collaborators on the PARTHENOS H2020 project, they focus their research on the development of standards for data management and research tools in the Arts and Humanities, and they currently work on the creation of a Data Management Plan for this project.
Persistent Identification
Persistent identifiers: Ondřej Košarko
Abstract
Persistent identifiers.
The proliferation of datasets and services available online invites researchers to link to them from their works. A link to a service that makes it possible to explore the data in question oneself may be more valuable than a picture. But these online resources and/or the infrastructures they live in are constantly evolving, which effectively leads to dead links or links to a different version of the resource. PID systems can help keep the locations of resources up to date, as well as store information about what the resource is.
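The mechanism described above is an indirection layer: the citable identifier never changes, while the resolver's record of where the resource lives can be updated. The toy model below illustrates this idea only; it is not a real PID system, and the handle-style identifier and URLs are invented.

```python
# Toy model of PID indirection: citations hold the stable PID,
# and only the resolver's record changes when a resource moves.
class PidResolver:
    def __init__(self):
        self._registry = {}  # pid -> {"url": ..., "metadata": ...}

    def register(self, pid, url, metadata=None):
        self._registry[pid] = {"url": url, "metadata": metadata or {}}

    def update_location(self, pid, new_url):
        # The resource moved hosts; the PID cited in papers is unchanged.
        self._registry[pid]["url"] = new_url

    def resolve(self, pid):
        return self._registry[pid]["url"]

resolver = PidResolver()
resolver.register("hdl:12345/EXAMPLE-1", "https://old-host.example/data",
                  metadata={"type": "dataset"})
resolver.update_location("hdl:12345/EXAMPLE-1", "https://new-host.example/data")
print(resolver.resolve("hdl:12345/EXAMPLE-1"))
# https://new-host.example/data
```

Real systems such as the Handle System or DOIs work on this principle at global scale, with the registry maintained by resolution services rather than in memory.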
Speaker for this session
Ondřej Košarko
Ondřej Košarko is a programmer working at the Institute of Formal and Applied Linguistics (UFAL), Prague, Czech Republic. He is one of the developers behind the LINDAT/CLARIN repository. The repository is based on DSpace and has been modified to meet the needs of CLARIN centers; this modified version is now deployed in several member institutions. He is also responsible for parts of shortref.org, a tool to ease persistent data citation, and various other bits and pieces, such as a guide for choosing an adequate licence.
Canonical Text Services: Matthew Munson & Christopher Blackwell
Abstract
Canonical Text Services.
Canonical Text Services (CTS) is a protocol for the identification and retrieval of passages of text by means of machine-actionable citations in URN form. CTS is not a bibliographic database or a commentary framework, but a protocol intended to serve use cases like those. CTS consists of a specification for URN citations and a specification for a service protocol. CTS was created for the Homer Multitext to address that project’s need to integrate (a) an open-ended diversity of texts, (b) many specific versions of the same text, some digital, some in print, and some in manuscript, many fragmentary, (c) at arbitrary levels of abstraction (“Iliad Book 2”) or specificity (“The third letter sigma at Iliad 1.2 on the Venetus A manuscript”), (d) with the assumption that technologies for storage, retrieval, and display will change completely during the project’s lifetime. This presentation will introduce CTS as a possible model for persistent identifiers in a large-scale, distributed digital library. The first part of the presentation will offer an overview of the protocol, the CTS URN citation scheme, and the CTS Service requests, with attention to the applications and limitations of CTS. The second part will present how the Open Greek and Latin project of the University of Leipzig is implementing CTS, and the tools it is making available for editors, publishers, and consumers of CTS texts.
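To give a feel for the machine-actionable citations the abstract describes, here is a rough sketch of the CTS URN structure (`urn:cts:<namespace>:<work hierarchy>:<passage>`). The sample URN cites Iliad 1.2; treat the exact identifier components as illustrative rather than authoritative.

```python
# Rough sketch of parsing a CTS URN into its components.
# Scheme: urn:cts:<namespace>:<work hierarchy>[:<passage>]
def parse_cts_urn(urn):
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"] or len(parts) < 4:
        raise ValueError("not a CTS URN: " + urn)
    namespace, work = parts[2], parts[3]
    # A trailing passage component ("1.2" = Book 1, line 2) is optional:
    # without it, the URN cites a whole text rather than a passage.
    passage = parts[4] if len(parts) > 4 else None
    return {
        "namespace": namespace,
        # The work component is itself hierarchical:
        # textgroup.work[.version[.exemplar]]
        "work": work.split("."),
        "passage": passage,
    }

print(parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001:1.2"))
# {'namespace': 'greekLit', 'work': ['tlg0012', 'tlg001'], 'passage': '1.2'}
```

This hierarchy is what lets CTS cite at arbitrary levels of abstraction: dropping the version part of the work hierarchy cites the notional work across all editions, while a deeper passage component pins down a specific line or even character in one witness.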
Speakers for this session
Matthew Munson
Matthew Munson received an MA in Religious Studies from the University of Virginia, with a thesis studying the use of the Greek word for law (νόμος) in the letters of the Apostle Paul. Before joining the Digital Humanities Team, he worked at the Scholars’ Lab at the University of Virginia and in the DARIAH project at the Göttingen Centre for Digital Humanities at the University of Göttingen, Germany. He is currently working on his PhD in Theology in Leipzig, studying the automatic extraction of semantic data from biblical texts and the automatic tracking of semantic drift between corpora.
Christopher W. Blackwell
Christopher W. Blackwell holds a B.A., summa cum laude, from Marlboro College in Vermont, USA. He holds a Ph.D. from Duke University, where he was the William H. Willis Fellow in Classics. Since 1995 he has been on the faculty of Classics at Furman University in South Carolina, USA. He served as Chair of the Classics Department for 14 years, until 2015, and is currently the Louis G. Forgione University Professor. Since 2001 he has been Project Architect, with Neel Smith, of the Homer Multitext, a project of the Center for Hellenic Studies of Harvard University under the editorship of Casey Dué and Mary Ebbott. With Smith, Blackwell is co-creator of the Canonical Text Services protocol and the CITE Architecture for identification and retrieval of scholarly resources by canonical citation in networked environments. Blackwell has led several digitization projects and has collaborated with scholars in the U.K., Italy, Germany, the Netherlands, Greece, and Croatia. He has published two books on the history of Alexander the Great, and articles on topics in Classics, Computer Science, Intellectual Property Law, and Botany.
Evaluation, Acknowledgement and Credit Circulation
Open peer review & Open commentary: about an experiment: Julien Bordier
Abstract
Open peer review & Open commentary: about an experiment.
For five months, the OA journal Vertigo experimented with both open peer review and open commentary within its scientific blog. While the first consisted strictly in opening up a classical pre-publication review process, the second invited the whole “scientific community” to comment on pre-publications in order to improve them before submission. In both setups, all reviews, comments and annotations are accessible to everyone online, as are the names of the authors, reviewers and commentators. The sociologist in charge of this project will present the details of the experiment and its main results – the need for human mediation, technical possibilities and limitations – and will try to raise the questions and potentialities opened up by new forms of reviewing in academic publishing.
Speaker for this session
Julien Bordier
Julien Bordier, sociology PhD, independent scholar, editorial adviser, works on public-space issues. He conducted the open peer review experiment for OpenEdition / Centre pour l’édition électronique ouverte.
Simplifying license selection: Ondřej Košarko
Abstract
Simplifying license selection.
The necessity to share and preserve data and software is becoming more and more important: without the data and the software, research cannot be reproduced and tested by the scientific community. Making data and software easily reusable and legally unequivocal requires choosing a licence for them, which is not a trivial task.
Speaker for this session
Ondřej Košarko
Ondřej Košarko is a programmer working at the Institute of Formal and Applied Linguistics (UFAL), Prague, Czech Republic. He is one of the developers behind the LINDAT/CLARIN repository. The repository is based on DSpace and has been modified to meet the needs of CLARIN centers; this modified version is now deployed in several member institutions. He is also responsible for parts of shortref.org, a tool to ease persistent data citation, and various other bits and pieces, such as a guide for choosing an adequate licence.
European Network for Research Evaluation in the Social Sciences and Humanities: Ioana Galleron
Abstract
Data informed research evaluation: challenges of data collection and data standardisation in the SSH.
Evaluation has always been perceived as a difficult area for the SSH, for a number of reasons. One of the problems is the fact that the most common procedures have been fine-tuned to the so-called hard sciences and as such are ill adapted to the SSH disciplines. While abundant information exists about research practices, disciplinary biases and dissemination traditions in STEM fields, the situation is uneven, to say the least, between Nordic and Southern countries with regard to the monitoring of research production and outputs in the SSH disciplines. This presentation will briefly introduce the COST Action CA15137, dedicated to the creation of a network of evaluators for the SSH disciplines, and will then focus on the needs and challenges of data collection for an informed peer evaluation of the SSH.
Speaker for this session
Ioana Galleron
Ioana Galleron is a Senior Lecturer in French language and literature. Her research interests are French theatre of the 17th and 18th centuries, as well as the evaluation of research in the SSH. She is involved in several projects of electronic edition of plays (see http://www.licorn-research.fr/Boissy.html), and in a research group of the consortium CAHIER dedicated to computer-assisted literary analysis. Since April 2016, she has been the Chair of the COST Action CA15137.
Case studies
OpenEdition: towards a European infrastructure for open access publication in humanities and social sciences: Pierre Mounier
Abstract
Towards a European infrastructure for open access publication in humanities and social sciences.
OpenEdition gathers four platforms for open access publication in the humanities and social sciences: journals, books, scientific programs and academic blogs. Based in France, OpenEdition has initiated specific programs in several European countries in order to offer an international and multilingual infrastructure, currently disseminating online, in open access, more than half a million academic documents coming from more than twenty countries, in 14 languages. OpenEdition now aims at developing a distributed Europe-wide infrastructure with 19 partners. Named OPERAS, this new initiative will foster cooperation on a European scale and help the humanities and social sciences join the common effort for the development of Open Science.
Speaker for this session
Pierre Mounier
Pierre Mounier is deputy director of OpenEdition, a comprehensive infrastructure based in France for open access publication and communication in the humanities and social sciences. OpenEdition offers several platforms for journals, scientific announcements, academic blogs, and, finally, books, in different languages and from different countries. Pierre teaches digital humanities at the EHESS in Paris. He has published several books about the social and political impact of ICT, digital publishing and digital humanities.
Czech Literary Bibliography: Vojtěch Malínek
Abstract
The Czech Literary Bibliography.
The aim of this paper is to give a short presentation of the Czech Literary Bibliography research infrastructure, its activities in recent years and its plans for the future. Stress will be put on the RETROBI software, developed as a result of a project digitising the card catalogue of the so-called Retrospective Bibliography of Czech Literature 1770-1945. The RETROBI software enables full-text and semi-structured searching in OCR representations of the original catalogue cards and offers features for online editing and indexing of the data. Afterwards, the possibilities of using CLB data for statistical and quantitative research in the field will be presented.
Speaker for this session
Vojtěch Malínek
Vojtěch Malínek, Institute of Czech Literature of the Czech Academy of Sciences
Turning the Polish Literary Bibliography into a Research Tool: Challenges, Standards, Interoperability: Maciej Maryl
Abstract
The Polish Literary Bibliography.
This presentation will discuss a research project aiming to transform the vast database of the Polish Literary Bibliography (PBL) into a fully operational digital research infrastructure for the study of Polish literature and culture of the 20th century. The project entails retrodigitisation and the transformation of the existing records into a coherent database, as well as the development of data analysis tools for literary researchers. PBL is a specialized bibliography containing records about various types of materials concerning literature and literary scholarship (e.g. literary works, books, journals, magazines, articles, documents, dramas, movies, TV programs, conferences, awards, etc.), which are annotated in a unique semantic framework. In that respect it is similar to other national projects such as ABELL (Annual Bibliography of English Language and Literature). The online database contains records for 1988-2002, with printed volumes covering the period 1939-1987. In my presentation I would like to focus on the following issues:
- Challenges: the methodological problems of dealing with data collected during a long stretch of 60 years, including the conversion of OCR’d scans into a database.
- Standards: choosing the right ontology for the data and mapping our resources onto it.
- Interoperability: plans to link the resources with LOD cloud and other bibliographies (hopefully with the Bibliography of Czech Literature too).
Speaker for this session
Maciej Maryl
Deputy Director, Institute of Literary Research of the Polish Academy of Sciences.
Creation of Open Data Resources: Benefits of Cooperation: Kira Kovalenko & Eveline Wandl-Vogt
Abstract
Creation of Open Data Resources: Benefits of Cooperation.
In the presentation we are going to discuss the cooperation between the Austrian Centre for Digital Humanities (Austrian Academy of Sciences) and the Institute for Linguistic Studies (Russian Academy of Sciences). As a result of the collaboration, three projects are going to be developed: a digital version of the Dictionary of Russian Dialects, an electronic collection of Russian manuscript lexicons, and a database of Russian plant names (11th-17th centuries). All the projects will be implemented using cutting-edge technologies and will be available online.
Speaker for this session
Kira Kovalenko
Kira Kovalenko, Institute for Linguistic Studies (Russia) & Austrian Centre for Digital Humanities (Austria) & Eveline Wandl-Vogt, Austrian Centre for Digital Humanities (Austria).
Network of Dutch War Sources: pursuits and goals: Tessa Free
Abstract
The Network of Dutch War sources.
In the Netherlands, there are around four hundred organizations keeping a collection from or about the Netherlands in the Second World War. The program ‘Network of Dutch war sources’ (Netwerk Oorlogsbronnen) intends to make these geographically scattered sources digitally findable and usable. We do that by engaging in or leading small projects with several participating organizations: for example, creating a Second World War thesaurus implemented in collection management systems; using OCR and NER techniques to make millions of documents accessible at the document level; and adding persistent geographical codes to sources to enhance location-focused searching.
The Network of Dutch War Sources is a program of the NIOD Institute for War, Holocaust and Genocide Studies. See www.oorlogsbronnen.nl for more information about the program.
Speaker for this session
Open access meets productivity: “Scholarship, see effect of being an efficient source”: Adele Valeria Messina
Abstract
How do we use an EBSCO database? How can an article be found without difficulty?
The primary talking point is the efficiency of Open Access in the social sciences and humanities. The talk will therefore introduce a case study: “the method of online academic reviews and the alleged delay of post-Holocaust Sociology”. The presentation will discuss this method, halfway between hemerographia and metasociology, and the measurement of some important indexes, such as the “speed of publication” of research and its “scientific impact” on the academic public. It argues that open access and the usability of data need to be understood as more than simply a kind of digital research. The project will support the circulation and connection of data and will be linked with well-established institutions.
This can happen best when there are energetic institutional means for researchers: it is when they try, to all intents and purposes, to claim what they want that digital democracy becomes challenging, as it is now in Europe.
Speaker for this session
Case Studies on digital content reuse in the context of Europeana Cloud: Eliza Papaki
Abstract
The use of digital content has, over the past couple of decades, become almost the norm for many researchers within the Humanities and Social Sciences. Curation of both digitised legacy data and born-digital content, however, makes it imperative that items are managed at an individual level in order for larger collections of data to be trusted and useful. Europeana is shifting focus from being a discovery portal of over 30 million digitised items to a platform that allows third parties to develop tools based on its content. In order to gather information about the potential use of existing collections in Europeana, research was conducted into developing an empirically-based, comprehensive list of User Requirements. Investigations included current data reuse within the sector, the quality of the content itself and identification of topics with which Europeana can be of most use.
In our investigations through the Europeana Cloud project, we took both the user and the provider perspectives. Topics were selected for trial using Europeana’s current content and other potential resources, both of which were subjected to questioning: how useful was the data to users? What tools or services could be used with it? What were the failings of the content, and how might they be overcome? In this presentation, two of these topics have been selected as case studies: Conflict-related Population Displacement, and Children’s Literature.
Speaker for this session
Data Journals & Editorialization of Open Data
Abstract
Do we still need peer-review? Datajournals as a way of reconsidering our evaluation culture and our understanding of research.
Never have scholars had to write so many applications and so many reviews as nowadays. Peer review has been institutionalized as the central regulation mechanism of the two core activities of research: formulating a research question and its workflow on the one hand, and criticizing its results on the other. Still, most scholars are deeply unsatisfied with a system in which they feel like they never really get to “do” research, but are rather stuck in a vicious circle of unproductive evaluations. While evaluation is perceived by scholars as more and more disconnected from research itself, the datajournal model developed by DARIAH in the context of the Episciences platform aims at re-harmonizing research and evaluation, integrating peer reviews as contributions to the research and development process of an online resource, in a continuous (virtuous) feedback loop. The session will address both the benefits and the challenges of datajournals, aiming more widely at initiating a constructive dialogue on publication and evaluation structures in the digital age.
Speakers for this session
Anne Baillot
Anne Baillot was a trainee civil servant at the École Normale Supérieure in Paris between 1995 and 1999. She completed her PhD in 2002 in Paris. Since then, she has been living in Berlin where she worked as a post-doctoral researcher at various institutions. Between June 2010 and January 2016, she was junior research group leader at the Institute of German Literature of Humboldt University, funded by the DFG (German Research Foundation). As a junior group leader, Anne Baillot is the editor of Letters and Texts: Intellectual Berlin around 1800. Since 2013, Anne has been a member of the editorial board of fr.hypotheses and en.hypotheses, and since 2015, Anne has been a board member of the German DH association (DHd) and of the European Society for Textual Scholarship. She blogs about her research in English on http://digitalintellectuals.hypotheses.org/ and tweets as @AnneBaillot. Since February 2016, she has joined Laurent Romary’s team and is working at the interface between research, infrastructure and cultural heritage institutions. She is Managing Editor for the Journal of the Text Encoding Initiative and is working towards developing new models for journals in the scholarly ecosystem. Her next book (to appear 2017) is dedicated to the relationships between writers and publishers between the late 18th and early 20th century in Germany.
Marie Puren
Marie Puren is a junior researcher in Digital Humanities at the French Institute for Research in Computer Science and Automation (INRIA) in Paris and a member of the Alpage laboratory (INRIA – Paris Diderot University). As a collaborator on the PARTHENOS H2020 project, she focuses her research on the development of standards for data management and research tools in the Arts and Humanities, and she currently works on the creation of a Data Management Plan for this project. Marie Puren also contributes to the IPERION H2020 project, especially by upgrading its Data Management Plan. After being a lecturer and responsible for continuing education projects at the Ecole nationale des chartes, Marie Puren was a visiting lecturer in Digital Humanities at Paris Sciences et Lettres (PSL) Research University. Her main publications belong to fields including the intellectual history of the 20th century, French studies and digital humanities. Marie Puren was awarded a Ph.D. in History at the Ecole nationale des chartes – Sorbonne University. She holds Master’s degrees in History and Political Science from the Institut d’Etudes Politiques de Paris, and in Digital Humanities from the Ecole nationale des chartes.
Economy of Open Access & Open Data Publication
Economic Models for Open Access Publications: Pierre Mounier
Abstract
Economic models for Open Access publications
The development of open access in the humanities and social sciences faces a major challenge: sustainability. Whereas in STM disciplines the new dissemination paradigm means shifting from a reader-pays model to an author-pays model, the infamous “APC”, in the humanities and social sciences that type of reconfiguration is simply not possible, for many good reasons. Moreover, the dissemination of knowledge in those disciplines happens mostly through books and not solely in journals, which entails additional complications. Therefore, those who want to develop open access in the SSH have to find their own solutions, ones that fit their specific ecosystem. Whether based on donations, grants, subscriptions, in-kind contributions, crowdfunding or a “freemium” model, there are many ongoing experiments under development. A landscape of the different models and the main trends on the topic will be presented.
Speaker for this session
Pierre Mounier
Pierre Mounier is deputy director of OpenEdition, a comprehensive infrastructure based in France for open access publication and communication in the humanities and social sciences. OpenEdition offers several platforms for journals, scientific announcements, academic blogs, and, finally, books, in different languages and from different countries. Pierre teaches digital humanities at the EHESS in Paris. He has published several books about the social and political impact of ICT, digital publishing and digital humanities.
Repository-as-a-Service: An Experimental Model for the Sustainable Curation and Funding of Large Niche Corpora in the Humanities: Patrick Flack
Abstract
Repository-as-a-Service: An Experimental Model for the Sustainable Curation and Funding of Large Niche Corpora in the Humanities.
sdvig press is a non-profit academic publishing platform dedicated to supporting the dissemination and linking of knowledge in the Humanities between Eastern, Central and Western Europe. One of its central missions is to give visibility and provide structured access to large corpora of texts from Russia, Poland, the Czech Republic or the Baltic states, relating in particular to important epistemological paradigms of the Humanities such as structuralism, phenomenology or critical theory. This objective implies not only the high-quality digitisation of textual sources, but also their translation, at least into English. Given the still obscure nature of these corpora, however, it is hard to find anything more than one-off funding, and there is no prospect of even mild commercial success to finance these tasks in the systematic, long-term perspective that they require.
The solution explored by sdvig press is the development of thematic platforms that integrate Central and Eastern European corpora into better defined, more visible and more international contexts. We are developing three prototypes, of which the Open Commons of Phenomenology is the most advanced (the other two are Structuralica and Pacem). Each of these platforms is conceived, at first, mainly as an exhaustive bio-bibliographical repository providing structured access (ideally) to all sources and references in its thematic field. Access to contents is wholly unrestricted, but a number of tools (advanced search, lists, visualisations, etc.) are made available only to libraries through a subscription.
Speaker for this session
Patrick Flack
Patrick Flack is the managing director of sdvig press, an open access, non-profit academic publishing house. He is also an associate member of the Central-European Institute for Philosophy (Czech Academy of Sciences, Prague). Since completing his PhD in 2011 (Comparative Literature, Charles University in Prague), he has worked in Helsinki, Leuven and Berlin as a post-doctoral researcher funded by the Swiss National Science Foundation. His research focuses on structuralism and a trans-cultural, interdisciplinary approach to its historiography. With sdvig press, he is currently developing a number of open access thematic platforms – such as the Open Commons of Phenomenology – designed to function as sustainable infrastructural and communication hubs for their respective scientific communities. The development of these platforms is linked directly with international institutions (Husserl Archives, Czech National Library, etc.), embedding their research projects, archival holdings and editorial outputs.
Infrastructure & Platform
Contrasting Platforms and Infrastructures as Configurations for Data Sharing: Jean Christophe Plantin
Abstract
Contrasting platforms and infrastructures as configurations for data sharing.
This talk will discuss the impact on scholarship when data sharing is increasingly organized by social media platforms. It does so by contrasting these entities with existing data infrastructures that acquire, curate, and process data for archiving and further dissemination. An analysis of the routines, procedures, and everyday work of data-processing staff at a social science data archive will provide elements to detail the "regime of care" that defines how infrastructures treat research data, and how it contrasts with the way digital intermediaries organize data circulation.
Speaker for this session
Jean-Christophe Plantin
Jean-Christophe Plantin is Assistant Professor at the London School of Economics and Political Science, Department of Media & Communications. He investigates the civic use of mapping platforms, the collaborative challenges in big data science, and the evolution of knowledge infrastructures. His research has been funded by the Alfred P. Sloan Foundation, the Gordon and Betty Moore Foundation, the European Regional Development Fund, and the University of Michigan MCubed Program. His work has been published in New Media & Society, Media, Culture & Society, and the International Journal of Communication.
Huma-Num: A French Infrastructure for Open Research Data in Humanities: Nicolas Larrousse
Abstract
Research Data Dissemination And Preservation: A Vision From Huma-Num, A French Infrastructure Dedicated To Humanities.
In the field of humanities and social sciences, the production of digital or digitised data has increased considerably in recent years. These data, which are usually very expensive to produce, are often lost at the end of a project and therefore rarely reused, owing to a lack of financial, human and technical resources in the communities that produced them. This talk will present the general approach, both technical and educational, used by the Huma-Num infrastructure to address these issues.
Speaker for this session
Nicolas Larrousse
Nicolas Larrousse is head of the long-term archiving department at Huma-Num, a French infrastructure which aims to provide services to researchers in the social sciences and humanities. He is particularly focused on interoperability and is involved in European infrastructures and projects. Huma-Num promotes collaboration and provides services to manage, enrich and expose research data through a wide network of partners and consortia. Huma-Num is the national coordinating institution for France within the DARIAH European infrastructure and is involved in H2020 European projects.
Social impact
Abstract
Infrastructure for an age of Global Philology.
This paper discusses core services and use cases for an infrastructure that seeks to support work on any historical language by speakers of as many modern languages as possible.
Speaker for this session
Gregory Crane
Gregory Crane is an Alexander von Humboldt Professor of Digital Humanities at Leipzig University. He is a specialist in classical philology and computer science. He completed a doctorate in classical philology at Harvard University and worked as an assistant professor there. He has the reputation of being a pioneer of digital humanities due to his development of the Perseus Digital Library, a freely accessible online library for ancient source material. He was associate professor at Tufts University and is now Winnick Family Chair of Technology and Entrepreneurship. He has received, among other awards, the Google Digital Humanities Award 2010.