Artificial Intelligence is pervading every aspect of our lives: it informs decisions in healthcare, transportation, finance, education, and government, to mention just a few domains. With it, explanation for high-stakes decision making (XAI) is at the centre of current debates, receiving much (and deserved) attention from industry and academia alike. AI is essentially a sociotechnical system, in which a decision maker interacts with various sources of information and decision-support tools. The quality of this process should be assessed in terms of the final, aggregated outcome (the quality of the decision) rather than only in terms of the decision-support tool in isolation (e.g., its predictive accuracy and standalone precision). It is therefore important to develop tools that explain their predictions in meaningful terms, a property rarely matched by AI systems available on the market today. The explanation problem for a decision-support system can be understood as a trade-off between what algorithms can safely ignore and what meaningful information must be included to make an informed decision. The problem of XAI is thus intertwined with epistemic and normative problems such as trustworthiness (what sort of knowledge we have and what can safely be ignored), comprehensibility (the human meaningfulness of explanations), and accountability (humans retaining ultimate responsibility for the decision). In this context, several epistemological, ethical, and legal questions emerge: What is the logic of a successful XAI? How can we ensure reliability and trust? Which ethical concerns emerge in the context of (un)successful explanations? Does current XAI comply with regulations? The successful candidate will work at the intersection of epistemological and ethical issues, with a strong emphasis on the normative aspects of explanatory AI.
Different ethical frameworks and principles will be studied to understand the implications of XAI and the roles of different stakeholders in high-stakes decision making (developers, users, individuals affected by the system's actions, legal experts, etc.). Case studies will also be a focus of this project, as they will engage relevant stakeholders to capture adherence to social, legal, and ethical norms and, ultimately, human responsibility.
This PhD project will be developed within the EU Horizon 2020 project SoBigData++. SoBigData++ strives to deliver a distributed, pan-European, multidisciplinary research infrastructure for big social data analytics, coupled with the consolidation of a cross-disciplinary European research community, aimed at using social mining and big data to understand the complexity of our contemporary, globally interconnected society. SoBigData++ is set to advance these ambitious tasks thanks to SoBigData, the predecessor project that began this construction in 2015. Becoming an advanced community, SoBigData++ will strengthen its tools and services to empower researchers and innovators through a platform for the design and execution of large-scale social mining experiments. It will be open to users with diverse backgrounds, accessible on the project cloud (aligned with the EOSC), and will also exploit supercomputing facilities. Pushing the FAIR principles further, SoBigData++ will render social mining experiments more easily designed, adjusted, and repeated by domain experts who are not data scientists. SoBigData++ will move from a starting community of pioneers to a wide and diverse scientific movement, capable of empowering the next generation of responsible social data scientists engaged in the grand societal challenges laid out in its exploratories: Societal Debates and Online Misinformation; Sustainable Cities for Citizens; Demography, Economics & Finance 2.0; Migration Studies; Sport Data Science; and Social Impact of Artificial Intelligence and Explainable Machine Learning. SoBigData++ will advance from awareness of ethical and legal challenges to concrete tools that operationalise ethics through value-sensitive design, incorporating values and norms for privacy protection, fairness, transparency, and pluralism.
SoBigData++ will deliver an accelerator of data-driven innovation that facilitates collaboration with industry to develop joint pilot projects, and will consolidate a research infrastructure (RI) ready for the ESFRI Roadmap and sustained by a SoBigData Association.
REQUIREMENTS
• Master’s degree or equivalent in philosophy or a similar discipline.
• Candidates with interests in analytical philosophy (e.g. ethics of algorithms, ethics of AI, ethics of technology, philosophy of action) and a strong affinity with (or a degree in) philosophy of science, epistemology, philosophy of technology, or philosophy of engineering and computer science are strongly encouraged to apply.
• Strong interest in interdisciplinary research, principally with computer scientists.
• Excellent command of written and spoken English.
• Excellent communication skills and an interest in translating research ideas and findings for the benefit of non-academic stakeholders (e.g. managers and policymakers).
• The ability to work both independently and as part of a team.
CONDITIONS OF EMPLOYMENT
Fixed-term contract: 4 years.
TU Delft offers PhD candidates a 4-year contract, with an official go/no-go progress assessment after one year. Salary and benefits are in accordance with the Collective Labour Agreement for Dutch Universities, increasing from €2325 per month in the first year to €2972 in the fourth year. As a PhD candidate you will be enrolled in the TU Delft Graduate School. The TU Delft Graduate School provides an inspiring research environment with an excellent team of supervisors, academic staff and a mentor. The Doctoral Education Programme is aimed at developing your transferable, discipline-related and research skills.
TU Delft offers a customisable compensation package, discounts on health insurance and sports memberships, and a monthly work costs contribution. Flexible work schedules can be arranged. For international applicants we offer the Coming to Delft Service and Partner Career Advice to assist you with your relocation.
Technische Universiteit Delft
Delft University of Technology (TU Delft) is a multifaceted institution offering education and carrying out research in the technical sciences at an internationally recognised level. Education, research and design are strongly oriented towards applicability. TU Delft develops technologies for future generations, focusing on sustainability, safety and economic vitality. At TU Delft you will work in an environment where technical sciences and society converge. TU Delft comprises eight faculties, unique laboratories, research institutes and schools.
Faculty Technology, Policy and Management
The Faculty of Technology, Policy and Management (TPM) makes an important contribution to solving the complex technical and social challenges that we face as a society. Challenges such as energy, climate, mobility, IT, water and cyber security. This requires a multidisciplinary approach that goes beyond technology. Our education and research are therefore at the intersection of technology, society and management. We combine insights from the engineering sciences with those from the humanities and social sciences. We develop robust models and designs, are internationally oriented, and have extensive networks in science and practice.
For information about this vacancy, you can contact Juan M. Durán, email: j.m.duran[at]tudelft.nl.
For information about the selection procedure, please contact Mrs. Anita van Vianen, HR Advisor, email: vacature-tbm[at]tudelft.nl.
If you are interested in this position, please include in your application: (a) a CV, (b) a motivation letter including the names of two references, and (c) a list of publications, in a single PDF entitled “TPM20.038_YourLastname.pdf”. Send your application to vacature-tbm[at]tudelft.nl. Applications will be considered until June 15, or until the position is filled. Due to the Covid-19 measures, a remote start is possible.