TUG 2020 online — Program & participants

Keynote addresses

Steven is the designer of Open Sans, Droid, Liberation, Noto Sans for Latin, and many more typefaces. He has worked on a vast array of other typographic projects present in operating systems, phones, game consoles and other electronic devices.

I will give an overview of the document conversion program pandoc, with an emphasis on how it might be useful to people who are already comfortable using LaTeX to prepare documents. In the first part, I’ll discuss the use of pandoc to convert between LaTeX and other common formats, including Microsoft Word docx and HTML. In the second part, I’ll give some reasons why even a seasoned TeXnician might want to consider writing documents in pandoc’s extended Markdown instead of LaTeX, and I’ll teach some tricks that can be used to recover the tremendous power and flexibility of LaTeX in this simpler idiom.


Javier is the current maintainer and developer of Babel, a core LaTeX package originally written by Johannes Braams and the first major effort in the I18N of LaTeX. Javier has also written or contributed to many other packages for print production. He will be interviewed on camera by Paulo Ney de Souza and you are welcome to submit questions to him in advance via email to the conference committee tug2020@tug.org.

Philip is the author of Biber and the current developer and lead of BibLaTeX, which made programmable citations and bibliographies a reality. He will be interviewed on camera by Paulo Ney de Souza, and you are welcome to submit questions to him in advance via email to the conference committee, tug2020@tug.org.




On the first day of the conference, Sue and Cheryl will conduct a webinar on LaTeX, starting with basic principles in the morning and proceeding to more detailed topics in the afternoon. They will cover creating indexes, tables, figures, and bibliographies, entering mathematics, and many other topics.

  1. Creating a LaTeX Document
  2. Creating Numbered Section Headings
  3. Creating a Section That Uses Roman Numerals
  4. Still Using Roman Numbering for Sections
  5. Changing Numbering Back to Arabic
    1. Creating Subsection Headings
    2. Paragraph and Subparagraph
  6. Creating Footnotes
  7. Changing Font Styles
  8. Marking the Margin of a Paragraph
  9. Text in Columns
  10. Creating a Table of Contents
  11. Adding to Contents
  12. Itemizing, Enumerating, and Nesting
  1. Theorems, Lemmas, etc.
  2. Basic Tables
  3. Simple Mathematics and Creating Equations
    1. Subscripts and Superscripts
    2. Accents
    3. Binomial Coefficients
    4. Congruence
    5. Delimiters
    6. Operators
    7. Ellipses
    8. Integral
    9. Sum
    10. Matrices
  4. How to do Bibliographies
  5. Getting an Output


Day One

Steven is the designer of Open Sans, Droid, Liberation, Noto Sans for Latin, and many more typefaces, and has worked on a vast array of other typographic projects.

The newtx package has undergone some very substantial changes over the last couple of years, while striving to remain backwardly compatible with earlier versions. In this presentation, I’ll try to outline the motivations behind the changes and make comparisons with other general purpose LaTeX math packages. Among the codependency issues are problems in adapting a math package to fit a text font package, and, to a lesser extent, vice-versa.

The Island of TeX is a collaborative effort to provide a home to community-based TeX projects. In this talk, we will discuss our long-term goals and how the worldwide community can visit the island, have fun with the residents and help us enhance the TeX experience for everybody, from newbies to power users.

We take for granted the convenience of expressing emotions in typography for Latin-based writing, such as using capital letters. This submission explores a variety of type attributes, including color, typeface, size, and distortion, as they are used to convey emotional charge in Hangul (Korean) and Arabic writing.

Type specimens have been produced from the earliest days of Western printing history as commercial documents designed to sell printing type. Over the course of five centuries, this publication genre has kept pace with the needs of its consumers through innovative change. This time-traveling tour will navigate through the vast holdings of the RIT Cary Graphic Arts Collection to explore the formats and features that unfurl through the history of type specimens.

Bio: Amelia Fontanel is a curator at the RIT Cary Graphic Arts Collection, a renowned library that collects on design, typography, and the book arts. As manager of the Cary technology collection, she is responsible for teaching and maintaining over 30 different presses and thousands of fonts of metal and wood type. She is actively involved in the international printing community, holding executive board positions with the American Printing History Association and the Hamilton Wood Type and Printing Museum.

I presume most of us participating in or watching the TUG2020 conference are not professional historians but rather computing practitioners or users of computing technology (such as TeX). We have access to memories, papers, and a flexibility in what we study and how we present what we learn that aren't all available to professional historians. I believe it is our job to help the professionals, and the world more generally, to capture computing history while we can.

In this talk, Paulo, a devoted Catholic working as a volunteer in a small parish, will narrate his typographical adventure when producing material for masses and church-related activities. It is a very challenging yet wonderful experience!

For many years I have used a local installation of TeXLive and a version control system for my work. Recently I found myself in a position to work on papers with several teams having quite different degrees of knowledge of TeX and version control. Overleaf was chosen as the method to organize the collaborations. In this presentation I discuss my impressions and the lessons learned in the process.

The single most critical factor for document content to be accessible is that text can be extracted reliably and accurately. For a PDF file, the CMap structure gives a mapping of each character in a font to a corresponding Unicode code point. The pdfTeX and dvipdfmx engines have different ways to attach a CMap resource to font instances within PDF files. While it is a vital piece, the CMap is not the whole story, since the same character from the same font can be used in different ways. This is most apparent in a “fake” small-caps font, where uppercase glyphs are drawn from the same base font, but at reduced size, using the virtual font mechanism. By defining a second virtual font instance, and attaching a customised CMap file and map file entry, the lowercase letters of a faked small-caps font can be correctly extracted as lowercase Latin letters.

Accented Latin letters are often constructed within virtual fonts by placing the accent first, then the base. This is counter to Unicode where the combining accent character comes after the base. By rearranging the virtual font description this order can be changed, allowing text-extraction of correctly accented characters. This fixes many difficulties with a LaTeX T1-encoded font, but some Extended Latin characters still need further consideration. Using a little-known trick of inserting DVI special commands into the virtual font description, ActualText replacement tagging can be encoded inside the virtual font, allowing constructed characters to be mapped to their proper Unicode point.

Annotation of graphics in TeX has always been a difficult subject. Solutions starting with WARMReader and continuing through PSFrag and pinlabel did not have a path to the more modern TeX engines (pdf-, xe-, and lua-). In this talk we present a framework for moving the source files of the most common of all these packages, PSFrag, over to any other labelling desired (pinlabel, Overpic, XYOverPic, and TikZ), providing a path for processing legacy content and allowing more choices in production environments.

Day Two

This talk addresses the question of how empowerment (and the lack thereof) influences teaching and learning LaTeX, using the example of creating collaborative solutions between Humanities scholars and AllTeX consultants.

This is the story of a Ninja and a Noob, on teaching and learning new technologies during our PhD journey. The question of how to write a catalogue for an archaeological thesis produced the first discussions on why it is so hard for a noob, or newbie, to start with and stay with LaTeX. Out of the bad experiences of the happy Noob, a first blogpost (link) came into being. The feedback was amazing, even from members of TUG. We were invited to publish our article in TUGboat, and we did. Since then, a lot of time has passed, and the discussion on “How to write an archaeological catalogue with LaTeX” has grown. Alongside our own example and its evolution and corrections, the Ninja was able to coach another archaeologist with the same problem, and we learned that, as the material groups of archaeological findings change, different solutions must be produced. So the case studies here concentrate on archaeological work, but they form a prelude to the main focus: teaching LaTeX and empowering people with a non-technical background to learn and use it.

Online education is suddenly more important, from primary schools to research. This talk focuses on beginners learning LaTeX. We learn from what’s already been done for other computer languages.

First I survey how online LaTeX typesetting can be provided via a web browser. The main approaches are:

  1. LaTeX as a cloud service;
  2. LaTeX running in the browser;
  3. MathJax running in the browser.

Next, what to teach the student, and how. Teaching is not the same as writing a reference manual. It requires identifying core concepts, and presenting them in a helpful manner and order. Students need to test their understanding, and perhaps explore, before moving on. We consider LaTeX from this point of view.

At the school level, the Raspberry Pi Foundation (RPF) is one of the leaders. They offer many projects in HTML, CSS, JavaScript and Python. They have strong connections with students and teachers. They partner with Code Club and Future Learn. They provide many free resources. They run education research seminars. And they design and produce the Raspberry Pi.

Finally, we blend the experience of the RPF with the task of learning and teaching LaTeX, and conclude with some problems, opportunities and challenges (Website).

There are a plethora of resources available to new LaTeX users. However, it is much more difficult to discover which of these provides the best introduction to LaTeX. These online resources vary in quality and correctness: over time, and with limited editing, even good advice can become out-of-date. Many good resources are over-detailed for a new user who needs only straightforward help to get over the initial barrier to using the system.

For many programming languages there are now websites that offer the opportunity to try coding online using a cloud compiler. These cloud compilers can be harnessed by a range of teaching websites to offer a simple introduction to the language in a suitable IDE (integrated development environment); a good example is LearnPython.org. Such sites tend to be strictly limited in scope, as they are not aiming to teach every possible idea in the language but only a limited ‘Beginners’ menu.

Over the past six months, work has been ongoing to fill that gap for LaTeX: LearnLatex.org. The aim of this new site is to provide a carefully-curated set of resources for beginning LaTeX authors, with integrated use of an online LaTeX environment and demonstrations accessed directly from the lessons. The scope of these learning resources is focussed, with the aim of offering the material in bite-sized chunks.

In my talk, I will look at the background to developing learnlatex.org, the work done so far, what’s next, and how the team behind the site hope it can grow.

TopTeX (TopAnswers.xyz/tex) is a new site for questions and answers about TeX and friends. It is part of the TopAnswers network, an open-source, not-for-profit project. Its development is focused on the needs of users and provides a friendly environment for building a high-quality repository of knowledge.

In this presentation I will show the “why” and “how” of my database publishing software, which is built on top of LuaTeX. speedata Publisher is open source document preparation software primarily focussed on product catalogs, data sheets, and other documents whose layout does not follow the simple reading order of books and papers.

While a common LaTeX document mixes text and markup, the documents for database publishing usually contain just plain data without any formatting, encoded in the Extensible Markup Language (XML). So the aim of speedata Publisher is to process the XML source files and produce the high-quality PDF we all know from LaTeX, while being as flexible as possible.

To achieve flexibility in typesetting it is necessary to reinvent the wheel many times, because AllTeX typesetting routines do not, in most cases, fit a non-linear layout. LuaTeX allows the user to manipulate the internal data structures TeX uses for typesetting, so it is the perfect match: custom routines for layout, with access to the great line-breaking algorithm that (pdf)TeX uses.

I will present some use cases for speedata Publisher, show a bit of its built-in programming language (which is also written in XML), and give some technical insight into how my LuaTeX code interfaces with the TeX routines. There are no prerequisites for this presentation, but I will dive into some Lua programming, so basic knowledge of programming might help in understanding the technical part.

This talk describes the development of a LaTeX package to create a bookshelf image from a BibTeX file, suitable for use as a background for a video call in Zoom, Skype, or similar. Each entry is typeset as the spine of a book with title and author, using a randomly-selected font, color, and size. The talk describes the problems of random choice with both fixed-length and [potentially] endless lists, and the algorithm used to fit the author and title onto the spine. The package is available as ‘bookshelf’ on CTAN for inspection and testing.

In this talk I briefly touch on “where we are coming from” (we being the LaTeX Project Team) and “where we are now”, and then focus on the LaTeX Project plans for the upcoming years. These will primarily be focussed on providing an out-of-the-box solution for generating tagged PDF with LaTeX, and will include gently refactoring parts of the core LaTeX code and providing important functionality, such as extended standard support for color, hyperlinks, etc., as part of the kernel.

This is a multi-year journey that we have just started, and I will briefly explain the places it will take us through. At its end, we expect that LaTeX users will be able to produce tagged and “accessible” PDF without the need to post-process the result of their LaTeX run.

Malayalam, the mother tongue of the 35 million people of Kerala, the southernmost state of India, is one of the 22 official languages of India. It is a 1600-year-old Dravidian language; its script is classified as an abugida, or alphasyllabary. It consists of 51 basic characters which combine to form more than 900 conjuncts (ligatures). Conjuncts in Malayalam are formed in two ways, horizontal and vertical. For example, the basic characters (tha) and (sa) combine horizontally to form (thsa), whereas (tha) and (sa) combine vertically to form (stha).

There are two parts to a vertical conjunct: an above character and a below character. The above character is placed above the baseline, extending to the x-height. Below characters are smaller than the above characters, in handwriting as well as in printing. Owing to consonant–vowel pairing, below characters have 12 different heights, constituting different levels, quite unlike Roman fonts. Therefore the depth between the baseline and the descender allotted for normal Roman fonts is incapable of accommodating the below characters in vertical conjuncts; often they demand twice the depth of Roman fonts. Furthermore, this cannot be compensated by taking from the height above the baseline, since some vowel signs need the full cap-height.

The “new lipi” (alphabet) of Malayalam appeared in the 1970s and was later popularized in DTP with 140 characters; it is a subset of the original script, with its 900 conjuncts, standardised by Benjamin Bailey in 1824. New lipi could easily be accommodated in fonts with Roman dimensions. But when the language campaign named “Rachana” began in 1999, strongly advocating the traditional script in Malayalam computing, the orthography demanded more space below the baseline for vertical conjuncts. Ever since the advent of Unicode Malayalam in 2004, the traditional script, popularly known as the “old script”, has become more and more used in web publishing and printing. Designing fonts of the original script that adhere to the dimensions of Roman fonts puts tremendous pressure on designers. Also, the elastic nature of below characters often gives them different typographic characteristics than above characters. Keeping the leading (line space) as small as possible creates a further constraint.

All these problems arise from the specifications for the ascent and descent dimensions and the point-size estimates of Roman fonts. Malayalam font makers often find it difficult to comply with “Roman calculations” and are forced to apply different arithmetic. Other Indian scripts, with moderate below characters and diacritical marks, are lucky in this respect. Deviating from Roman calculations poses problems when typesetting documents using Malayalam and English fonts together. This talk discusses the unusual geometry of Malayalam vertical conjuncts, the conflicts it creates with usual font dimensions, and probable solutions.

Starting with TeXLive 2020, LuaLaTeX uses luahbtex instead of luatex and therefore allows the use of HarfBuzz instead of the ConTeXt-derived font shaper.

This presentation tries to answer some of the most important questions about this change.

Javier will be interviewed on camera by Paulo Ney de Souza; questions may be submitted in advance via email to the conference committee, tug2020@tug.org. Video of interview.

This talk discusses how teaching concepts (like didactical reduction) and common modes of knowledge transmission (such as references and documentation) could be reconciled in AllTeX teaching contexts. The problem: Informal knowledge about how to act in a learning setting and how to procure knowledge one needs is often linked to different forms of privilege. Using references to acquire knowledge already requires a relatively big amount of tech literacy or familiarity with the medium. People with non-tech backgrounds often lack this tacit knowledge. This talk proposes ways of providing different sorts of AllTeX education to different types of learners, using the efforts made on the LaTeX Ninja blog as an example.

One of the strengths of LaTeX is the wide range of available packages. But users at any level can face a paradox of choice: they have a task to accomplish and are not sure which packages will do it reliably and are up-to-date.

I’ll present a curated list of packages that seem to cover most of the things such users want to do. The criteria are to name packages that are reliable and widely used, with only one package per area, and with sufficient documentation that a beginner can at least get started with them. I’ll solicit help both on the packages chosen and on the most useful application areas; for example, coverage of pdfLaTeX vs. XeLaTeX vs. LuaLaTeX.

When universities and other schools closed campuses to help reduce the spread of coronavirus, many professors and teachers quickly adapted to online teaching by necessity. Likewise, students adapted to online learning and found ways to collaborate with peers while following social distancing guidelines. Overleaf, an online writing platform for TeX, has proven helpful in many of these scenarios. This talk will provide an overview of how Overleaf can be used most effectively in an education context. Topics include: how to effectively organize projects; suggested workflows for sharing assignment templates and receiving completed assignments; using Overleaf’s reviewing tools to collaborate and provide feedback on assignments.

Philip will be interviewed on camera by Paulo Ney de Souza; questions may be submitted in advance via email to the conference committee, tug2020@tug.org. Video of interview.

Successful tagging within PDF files generated from LaTeX source encourages a change in viewpoint on the nature and intent of the LaTeX coding. We do this by looking at how LaTeX coding can be used to capture the semantic meaning of textual snippets, through the use of user-defined macro names and environments, even when no extra formatting is required. An example document using such tagging principles is the Presentation Advice document (PDF version) prepared for this online conference.

Other advanced tagging aspects, requiring some degree of author input, are discussed in the paper. We also show how alternative views can be exported from a fully tagged PDF, into formats such as XML, HTML and Word documents. Hyperlinks, both internal (as ‘structure destinations’) and external (with xlink attributes), can be preserved in the exported view.

There are two parts to this presentation. First, Tom Price will describe a bundle of LaTeX files designed to build PDF/UA-accessible documents from LaTeX sources using the pdfLaTeX engine. The bundle takes full advantage of the capabilities of the pdfx.sty and tpdf.sty packages while requiring minimal effort on the part of document authors.

Next, Ross Moore, author of pdfx.sty and tpdf.sty, will discuss how tagging can be achieved within a diagram created using tikz.sty package methods. It is becoming increasingly common to encounter images built this way, so it will be necessary to tag the information in these, so that it becomes accessible to readers with visual disabilities. Ross will demonstrate some promising first steps in this direction; in particular for a ‘SWOT analysis’ diagram.

Creating PDF output from complex XML input to meet the expectations of global publishers.

TeX, LaTeX, and their friends have a long history, and a number of related programs have been developed, including variant TeX engines, DVIware, and supporting programs such as BibTeX, makeindex, and their alternatives. Thanks to such a rich ecosystem, we have numerous options for a TeX workflow to create a document. On the other hand, it is now hard, or in some cases even impossible, to know the ‘right’ workflow to process a document just by looking at the *.tex file. This keeps the portability of *.tex files low.

The light LaTeX make (llmk) is yet another build tool, specific to LaTeX documents, which intends to improve their portability. It is a simple program and will not replace (GNU) make or other richer build tools designed for complicated LaTeX documents. Its aim is to provide a simple way for document authors to write down a workflow, and to encourage people to always show the ‘right’ workflow for each document explicitly.

The design of llmk gives primary consideration to portability. It requires only texlua, so it will work in any environment that has LuaTeX. Users’ default settings are not supported, because they would destroy portability. Instead of providing a feature for personal settings, it tries to provide convenient defaults so that people can easily write workflows for ‘typical’ simple LaTeX documents.

TeXLive 2020 has seen the usual batch of fixes and new versions, but there is one change we have been working on for a long time: the renaming of containers to include the revision. The main aim of this change was to make life easier for distributors who rely on unique names. We will report a bit on the necessary changes and their implications.

The second part of the talk will briefly introduce the TeX related services at TexLive.info.

Day Three

Overleaf is an online collaborative editor for LaTeX. It produces PDF documents using a full TeXLive installation to compile projects authored by its users. Every year, when a new TeXLive version is released, it needs to be integrated in Overleaf without breaking existing projects that worked with previous TeXLive versions. This talk will explain how this is done. We will also take the opportunity to look at how the compilation service works.

When it comes to generating PDFs in web applications or IT systems, modern developers are offered a wide and disconcerting array of choices: numerous HTML-to-PDF converters (e.g., WkHtmltoPdf), innumerable native instruction-based PDF generators (e.g., PDFKit), or less obvious headless calls to office software (e.g., LibreOffice). Several cloud providers (e.g., PDFlayer) have packaged these features behind ready-to-use HTTP APIs.

However great and full-featured these options are, the results often fall short with regard to the typesetting and overall typographic quality of the documents. Why, then, is LaTeX not used more frequently by developers? Isn’t LaTeX, after all, the leading free software for producing well-typeset PDFs?

I argue in this talk that while much tooling exists, the barrier to entry is still too high for the everyday modern webapp developer to adopt LaTeX. Next, I present Latex-on-HTTP (YtoTech), my attempt at making LaTeX as straightforward to use in web applications as any other cloud service, e.g., sending an email through a cloud platform. The presentation will show how far we have got, demonstrate how to use LaTeX from an HTTP API in modern web applications (in a quotes-and-invoices use case), and discuss what is still missing to make Latex-on-HTTP and LaTeX a go-to solution for such usage.

The demo code used during the talk is available here.

Where are we? How did we get here? What’s the future? I’ll try to answer these questions, by looking outward in both space and time. Here are some ideas.

Don Knuth started TeX in 1977. The present version is a direct descendant of the 1982 version. Only at the end of the 1980s did hard disc drives cost less than $10 a megabyte. Today, for about $100 I can buy a pocket computer that connects to a ubiquitous network. It has gigabytes of solid state storage. It fits in my pocket, more easily than a book.

Today software that implements Don Knuth’s wonderful mathematical typesetting algorithm and fonts can be downloaded for free. And this software installs itself, and is interactive. In real time it previews LaTeX-encoded mathematics, as I type it.

My monitor is a 40-inch, 3840×2160 display with 24-bit color. It’s no more expensive than a TV. (In fact, it is a TV.) It cost about $450. It’s not my virtual desktop. It’s my vertical desktop, about the same size as my horizontal desktop.

As often as not, when I read beautifully typeset mathematics, it’s on my vertical desktop, as part of a web page. The only time I really need a PDF is to send a file to be printed. So that I can put it on my horizontal desktop, and write on it with a pen.

And the interactive mathematical typesetting software. It’s built on HTML5, and it’s called MathJax. And the future of TeX and LaTeX and our community. To succeed, we have to change, and also keep things the same. Linus Torvalds did something much the same with Unix, to create Linux.

By the way, my wonderful pocket computer that typesets mathematics. It’s also known as a mobile phone, and it runs the Linux kernel.

The video of the lecture is available on YouTube.

The aim of this talk is to present a subset of LaTeX, quite adequate for writing class notes and the like, which has a defined grammar and can be compiled on the fly to HTML. We will briefly discuss what goes into the compiler: grammar, parser, renderer, some optimizations — all written in Elm, a language of pure functions — then demonstrate how this technology can be used to create, edit, and disseminate documents. Website: MiniLaTeX.io.

The HINT file format is intended as a replacement for the DVI or PDF file format for on-screen reading of TeX output. This presentation gives an overview of the current state of the project and solicits answers to open questions that might influence further development. The project web page is HINT.

TeX’s math mode always had support for extensible delimiters which can grow according to the content delimited by them, but these were always based on vertically stacking repeated parts. While this can often provide decent results, it leaves a lot to be desired, especially for round or angle brackets. Given that LuaTeX includes, with luamplib, a MetaFont derived system, it should be possible to dynamically instantiate meta fonts with exactly the right delimiter size without any restrictions regarding their composition. This presentation shows one implementation of this which builds on MetaType1/AlgoType in order to output high quality and potentially even fully hinted, vector glyphs which properly integrate into a modern LuaTeX document.

What would an algorithm look like that improves on the TeX typesetting system’s scheme for breaking a book’s text into pages? This talk explores a new Python library for high-quality typesetting that I have been crafting, and that I have already used to format and print a short-run hardback book!

TeX was developed as a way of communicating mathematics; it has been very successful for that and much more. But TeX did not completely dominate publishing, though it much expanded the community able to write mathematics directly. MathML (Mathematics Markup Language) was specified as a markup for mathematics in the W3C (World Wide Web Consortium) context; it is both officially part of the web's basic HTML and an ISO standard. The idea that there should be a Global Digital Mathematics Library (GDML) is an obvious one. There's an International Mathematical Knowledge Trust (IMKT) devoted to eventually realizing a GDML, growing out of efforts by the International Mathematical Union. Some of how the present situation came to be and what's evolving now will be examined.

Given that the universal format-to-format translator Pandoc is coming of age, LaTeX authors are tempted to think that whatever LaTeX they write can quickly be translated without worry to whatever other format may be required.

Of course, that is not exactly true, but the use of an XML profile of LaTeX can make it exactly true. However, an SGML profile of LaTeX can provide closer emulation of classical LaTeX than an XML profile.

Most actors in the world of markup have restricted their use of SGML to XML. For that reason, software that handles SGML beyond the realm of XML seems to be falling out of maintenance. If the LaTeX community wishes to continue to avail itself of the advantages of SGML for LaTeX source emulation, it may fall to the LaTeX community to maintain the extant SGML libraries.
