Internal Conference & Industrial Engagement Event 2015

**This event has now passed. The page remains available for reference below the keynote presentation slides.**

Keynote Presentation Slides



Tuesday 2nd June – Wednesday 3rd June 2015, Informatics Forum, The University of Edinburgh, 10 Crichton Street

Click here to view the full programme as a PDF: Internal Conference & Industrial Engagement Event Programme

This two-day event is part academic conference, part industrial engagement opportunity.

It will feature keynotes by academic staff and presentations by senior PhD students on topics spanning the research spectrum of the School of Informatics. The first cohort of CDT in Pervasive Parallelism students will give short talks on their research.

In addition to the presentations, the event will include a networking reception with academic posters and industry partner booths, as well as the second Industrial Advisory Board meeting.

The conference will provide many opportunities for students, staff and industry partners to network and learn about one another’s work. Attendees will include:

  • CDT in Pervasive Parallelism (PPar) students, supervisors and management committees
  • Industry partners of the CDT PPar
  • Students and supervisors of the CDT in Data Science and the CDT in Robotics & Autonomous Systems
  • School of Informatics research and teaching staff
  • PhD, MSc and final-year undergraduate students from the Schools of Informatics, Physics, Engineering and Mathematics and the Edinburgh Parallel Computing Centre at the University of Edinburgh

Please RSVP via the Eventbrite page.
N.B.: This is a private event, by invitation only. If you would like to attend but haven’t received an invitation, please email ppar-cdt @

If you will need accommodation, we would advise booking as soon as possible, as June is a very busy time for hotels in Edinburgh. We have arranged discounted rates at multiple hotels, and a number of rooms are being held for a limited time period.

Please see the Accommodation and Travel Advice page for more information.

Internal Conference & Industrial Engagement Event Agenda


2 June 2015
Time Session
12:00-13:00 Registration and Lunch
13:00-13:05 Welcome from CDT PPar Director Prof Mike O'Boyle
13:05-14:45 Research Presentations – Session 1:
– Keynote: Professor Philip Wadler 
– PhD students: Stan Manilov + Nikolay Bogoychev
14:45-15:15 Refreshment Break
15:15-17:00 Research Presentations – Session 2:
– Keynote: Professor Steve Renals
– PhD students: William Ogilvie + Arpit Joshi
17:00-18:30 Reception with Company Booths and Posters from Informatics, Physics and Mathematics PhD Students
19:00 Dinner – By Invitation
3 June 2015
Time Session
08:45-09:15 Light Breakfast Available
09:15-11:00 Research Presentations – Session 3:
– Keynote: Professor Nigel Topham
– CDT PPar students: Mark Miller, Davide Pinato,
  Martin Rüfenacht, Galini Tsoukaneri, Justs Zariņš
11:00-11:30 Refreshment Break
11:30-13:00 Research Presentations – Session 4:
– Keynote: Dr Vittorio Ferrari
– CDT PPar students: Chris Cummins, Simon Fowler,
  James Ganis, Adam Harries, Artemy Margaritov
13:00-14:00 Networking Lunch
14:00-15:00 Industrial Advisory Board Meeting

Titles for all presentations are listed below, along with abstracts for the longer presentations.


Keynote Speakers

Professor Philip Wadler, Chair of Theoretical Computer Science – “The Inevitable Coincidence: A Basis for Concurrency and Distribution”

Professor Steve Renals, Chair of Speech Technology – “(Deep) Neural Networks for Speech Recognition”

Professor Nigel Topham, Chair of Computer Systems – “Many Cores Make Light Work”

Dr Vittorio Ferrari, Head of CALVIN Research Group on Visual Learning – “Visual Learning and Recognition at CALVIN”

Abstracts and biographies are below.

PhD Students

Stan Manilov, PhD Student in Computer Systems – “Free Rider: A Tool for Retargeting Platform-Specific Intrinsic Functions”

William Ogilvie, PhD Student in Computer Systems – “Intelligent Heuristic Construction with Active Learning”

Nikolay Bogoychev, PhD Student in GPGPUs and Machine Translation – “GPGPU for Machine Translation Decoding”

Arpit Joshi, PhD Student in Computer Architecture – “Efficient Persist Barriers for Multicores”

CDT in Pervasive Parallelism Students

Chris Cummins – “Dynamic Autotuning of Algorithmic Skeletons”

Simon Fowler – “Monitoring Distributed Erlang/OTP Applications with Multiparty Session Types”

James Ganis – “Online Parameter Tuning for Parallel Particle Filters”

Adam Harries – “High Performance Code Generation for Graph Algorithms on GPUs”

Artemy Margaritov – “Streaming Branch Direction Predictor for Data Centre Processors”

Mark Miller – “Fast and Parallel Relationship Descriptors for Interactive Motion Adaptation”

Davide Pinato – “Analysing the Impact of Rule Misses in an SDN Data Center Environment”

Martin Rüfenacht – “Message Passing Using Direct Memory Access Hardware”

Galini Tsoukaneri – “On the Feasibility of Inferring User Paths in Anonymized Crowdsourced Data”

Justs Zariņš – “Markov Chain Monte Carlo and miniapps”

Keynote Speaker Abstracts and Biographies


Professor Philip Wadler, Chair of Theoretical Computer Science
Title: “The Inevitable Coincidence: A Basis for Concurrency and Distribution”

The principle of Propositions as Types is an inevitable coincidence: independently formulated notions of logic and computation turn out to be identical. Under it, propositions correspond to types, proofs to programs, and simplification of proofs corresponds to evaluation of programs. It is a robust notion, adapting to almost all areas of computation, but with one important exception: concurrency! I will explain what Propositions as Types means, why it is important, and why concurrency and distribution may at last be about to benefit from this approach. And I will explain how this links to a broader research effort on extending the benefits of types to communication by introducing session types.
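The correspondence can be glimpsed even in everyday typed code. The following is our own minimal illustration, not an example from the talk, using Python's generics: each function is simultaneously a program of a given type and a proof of the corresponding proposition.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Proposition "A and B implies A" corresponds to the type Tuple[A, B] -> A.
# Its proof is the program that projects out the first component.
def fst(pair: Tuple[A, B]) -> A:
    left, _ = pair
    return left

# Proposition "A implies (B implies A)" corresponds to A -> (B -> A).
# Its proof is the constant-function combinator.
def const(a: A) -> Callable[[B], A]:
    return lambda _b: a
```

Evaluating `fst((1, "x"))` to `1` is, under the correspondence, simplifying a proof to its normal form.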

Philip Wadler is Professor of Theoretical Computer Science at the University of Edinburgh. He is an ACM Fellow and a Fellow of the Royal Society of Edinburgh, past chair of ACM SIGPLAN, past holder of a Royal Society-Wolfson Research Merit Fellowship, and a winner of the POPL Most Influential Paper Award. Previously, he worked or studied at Stanford, Xerox PARC, CMU, Oxford, Chalmers, Glasgow, Bell Labs, and Avaya Labs, and visited as a guest professor in Copenhagen, Sydney, and Paris. He has an h-index of 60, with more than 18,000 citations to his work according to Google Scholar. He contributed to the designs of Haskell, Java, and XQuery, and is a co-author of Introduction to Functional Programming (Prentice Hall, 1988), XQuery from the Experts (Addison Wesley, 2004) and Generics and Collections in Java (O’Reilly, 2006). He has delivered invited talks in locations ranging from Aizu to Zurich.

Professor Steve Renals, Chair of Speech Technology
Title: “(Deep) Neural Networks for Speech Recognition”

Neural networks have become a very hot topic in speech technology, with recent work on neural network acoustic and language modelling extending the state-of-the-art, and attracting an extraordinary amount of interest. This talk will give an overview of current work in the area, making links with work done since the late 1980s, while showing what is new. In particular, I’ll discuss how neural networks can learn suitable representations for distant speech recognition based on multichannel input and approaches to adapting neural networks to different domains, speakers, or acoustic conditions. I’ll finish by talking about some current challenges that could drive future work in neural networks for speech recognition.

Steve Renals is professor of Speech Technology and director of the Institute for Language, Cognition and Computation in the School of Informatics at the University of Edinburgh. Previously, he was director of the Centre for Speech Technology Research (CSTR). He received a BSc in Chemistry from the University of Sheffield in 1986, an MSc in Artificial Intelligence from the University of Edinburgh in 1987, and a PhD in Speech Recognition and Neural Networks, also from Edinburgh, in 1990. From 1991-92 he was a postdoctoral fellow at the International Computer Science Institute (ICSI), Berkeley, and was then an EPSRC postdoctoral fellow in Information Engineering at the University of Cambridge (1992-94). From 1994-2003 he was lecturer, then reader, in Computer Science at the University of Sheffield, moving to Edinburgh in 2003. He has over 200 publications in speech and language processing, and has led several large projects in the field, including the EPSRC Programme Grant Natural Speech Technology and the AMI and AMIDA Integrated Projects. He is a senior area editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing and a member of the ISCA Advisory Council. He is a fellow of the IEEE, and a member of ISCA and of the ACM.

Professor Nigel Topham, Chair of Computer Systems
Title: “Many Cores Make Light Work”

The central idea behind pervasive parallelism is that parallel computing is a key enabling technology for a wide variety of future computing platforms. To be truly pervasive it must enable not only the obvious high-end compute platforms, but also find widespread use in deeply embedded devices. This talk begins with a brief overview of diverse strands of parallel systems research in the University of Edinburgh’s School of Informatics. We then discuss the silicon technology trends that are shaping the evolution of next-generation embedded systems, and fuelling the impetus for many-core embedded systems. We then present a disruptive example of many-core embedded computing to illustrate the principles and justify the claim that many-core architectures have a key role to play in low-power and mobile computing. Our example is high-bandwidth optical wireless communication (OWC), an emerging application that is known for its high computational requirements, similar to those found in software-defined radio. In order to facilitate the deployment of OWC in embedded devices supporting the Internet of Things (IoT), it must be delivered within a small energy envelope. This talk outlines the challenges posed by OWC in this context, and describes how current silicon trends and OWC challenges have shaped the design of a 32-core chip recently implemented by the PASTA research group in the School of Informatics at Edinburgh.

Nigel Topham, FREng, is the Professor of Computer Systems and was the founding director of the Institute for Computing Systems Architecture (ICSA) at the University of Edinburgh. As CTO and Chief Architect for ARC International (LSE:ARK.L), he led the development of microprocessors that are now shipping in billions of silicon chips per year. He designed the EnCore processor, within the EPSRC PASTA project, and this is now commercially available for licensing via a world-leading international IP provider. He was the original author of ArcSim, arguably the fastest processor simulator in commercial use today. Nigel has served on the editorial boards of the Journal for Instruction Level Parallelism, Microprocessors and Microsystems, and the IET journal CDT, is a Distinguished Reviewer for ACM TACO, and has served on numerous top-tier program committees including ISCA, MICRO, HPCA, PACT, DAC, ICPP and ICCD. Nigel has been a regular invited speaker at Microprocessor Forum both in Silicon Valley and in Japan. His current research interests include low-power many-core processors, their use in embedded and server applications, and novel memory architectures for many-core systems.

Dr Vittorio Ferrari, Head of CALVIN Research Group on Visual Learning
Title: “Visual Learning and Recognition at CALVIN”

A key goal of computer vision is to interpret complex visual scenes, by recognizing visual concepts, localizing them, and understanding their interactions within the scene. To achieve this we need powerful visual learning techniques to acquire rich models capturing the diversity of the visual world. In this talk I will give an overview of recent research on visual learning and recognition at the CALVIN group, with a focus on reducing the amount of human supervision necessary to learn visual concepts. I will also stress the importance of the resurgence of Neural Network techniques in our field. These are particularly suitable for parallel computing architectures such as GPUs, which are becoming essential to modern computer vision research.

Vittorio Ferrari is a Reader at the School of Informatics of the University of Edinburgh, where he leads the CALVIN research group on visual learning. He received his PhD from ETH Zurich in 2004 and was a post-doctoral researcher at INRIA Grenoble in 2006-2007 and at the University of Oxford in 2007-2008. Between 2008 and 2012 he was Assistant Professor at ETH Zurich, funded by a Swiss National Science Foundation Professorship grant. In 2012 he received the prestigious ERC Starting Grant, and the best paper award from the European Conference on Computer Vision for his work on large-scale image auto-annotation. He is the author of over 70 technical publications. He regularly serves as an Area Chair for the major computer vision conferences and he will be a Program Chair for ECCV 2018. He is an Associate Editor of IEEE Pattern Analysis and Machine Intelligence. His current research interests are in weakly supervised learning of object classes, semantic segmentation, and large-scale auto-annotation.

PhD Student Abstracts


Stan Manilov, Second Year PhD Student in Computer Systems
Title: “Free Rider: A Tool for Retargeting Platform-Specific Intrinsic Functions”

Short-vector SIMD and DSP instructions are popular extensions to common ISAs. These extensions deliver excellent performance and compact code for some compute-intensive applications, but they require specialised compiler support. To enable the programmer to explicitly request the use of such an instruction, many C compilers provide platform-specific intrinsic functions, whose implementation is handled specially by the compiler. The use of such intrinsics, however, inevitably results in non-portable code. In this talk we present a novel methodology for retargeting such non-portable code, which maps intrinsics from one platform to another, taking advantage of similar intrinsics on the target platform.

We employ a description language to specify the signature and semantics of intrinsics, and perform graph-based pattern matching and high-level code transformations to derive optimised implementations exploiting the target’s intrinsics wherever possible. We demonstrate the effectiveness of our new methodology, implemented in the Free Rider tool, by automatically retargeting benchmarks derived from OpenCV samples, and a complex embedded application optimised to run on an ARM Cortex-M4, to an Intel Edison module with SSE4.2 instructions. We achieve a speedup of up to 3.73x over a plain C baseline, and on average 96.0% of the speedup of manually ported and optimised versions of the benchmarks.
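The core idea of mapping semantically equivalent intrinsics between platforms can be pictured as a lookup over a description table. The sketch below is our own illustration, not Free Rider's actual description language or matching algorithm: the intrinsic names are real ARM NEON and Intel SSE intrinsics, but the table format and the `retarget` helper are hypothetical.

```python
from typing import Optional

# Hypothetical description table pairing semantically equivalent intrinsics,
# keyed by an abstract operation signature: (operation, element type, lanes).
INTRINSIC_MAP = {
    ("add", "f32", 4): {"neon": "vaddq_f32", "sse": "_mm_add_ps"},
    ("mul", "f32", 4): {"neon": "vmulq_f32", "sse": "_mm_mul_ps"},
    ("sub", "f32", 4): {"neon": "vsubq_f32", "sse": "_mm_sub_ps"},
}

def retarget(intrinsic: str, source: str, target: str) -> Optional[str]:
    """Map a platform-specific intrinsic to its counterpart on another
    platform, or return None if the description table has no equivalent."""
    for platforms in INTRINSIC_MAP.values():
        if platforms.get(source) == intrinsic:
            return platforms.get(target)
    return None
```

For example, `retarget("vaddq_f32", "neon", "sse")` yields `"_mm_add_ps"`, since both perform a four-lane single-precision addition; intrinsics with no described equivalent fall back to plain C in the real tool.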


William Ogilvie, Second Year PhD Student in Computer Systems and Optimizing Compilers
Title: “Intelligent Heuristic Construction with Active Learning”

Building effective optimization heuristics is a challenging task which often takes developers several months, if not years, to complete. Predictive modelling has recently emerged as a promising solution, automatically constructing heuristics from training data. However, obtaining this data can take months per platform. This is becoming an ever more critical problem, and if no solution is found we shall be left with out-of-date heuristics which cannot extract the best performance from modern machines.

In this work, we present a low-cost predictive modelling approach for automatic heuristic construction which significantly reduces this training overhead. In supervised learning, training instances are typically selected at random for evaluation, regardless of how much useful information they carry. This wastes effort on parts of the space that contribute little to the quality of the produced heuristic. Our approach, on the other hand, uses active learning to select and focus only on the most useful training examples.

We demonstrate this technique by automatically constructing a model to determine on which device to execute four parallel programs at differing problem dimensions for a representative CPU–GPU based heterogeneous system. Our methodology is remarkably simple and yet effective, making it a strong candidate for wide adoption. At high levels of classification accuracy the average learning speed-up is 3x, as compared to the state-of-the-art.
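The contrast with random selection can be seen in a toy example. The sketch below is our own illustration of uncertainty sampling on a one-dimensional threshold problem, not the method or benchmarks from the paper: instead of labelling randomly chosen points, each query goes to the unlabelled point the current model is least certain about.

```python
import random

def oracle(x: float) -> int:
    """Ground truth we are trying to learn; each call is one 'expensive' label."""
    return 1 if x >= 0.62 else 0

def fit_threshold(labelled):
    """Fit a threshold classifier: midpoint between the largest known 0
    and the smallest known 1."""
    zeros = [x for x, y in labelled if y == 0]
    ones = [x for x, y in labelled if y == 1]
    return (max(zeros) + min(ones)) / 2

random.seed(0)
pool = [random.random() for _ in range(1000)]        # unlabelled candidates
labelled = [(0.0, oracle(0.0)), (1.0, oracle(1.0))]  # two seed examples

for _ in range(10):                                  # only 10 label queries
    boundary = fit_threshold(labelled)
    x = min(pool, key=lambda p: abs(p - boundary))   # most uncertain point
    pool.remove(x)
    labelled.append((x, oracle(x)))

print(round(fit_threshold(labelled), 3))             # converges towards 0.62
```

Ten targeted queries pin the hidden threshold down far more tightly than ten random labels would, which is the intuition behind the reduced training overhead.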
