How to cite this paper
Sperberg-McQueen, C. M. “Seeing things whole.” Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8, 2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). https://doi.org/10.4242/BalisageVol13.Sperberg-McQueen02.
Balisage: The Markup Conference 2014
August 5 - 8, 2014
Balisage Paper: Seeing things whole
C. M. Sperberg-McQueen
Founder and principal
Black Mesa Technologies LLC
C. M. Sperberg-McQueen is the founder and
principal of Black Mesa Technologies, a consultancy
specializing in helping memory institutions improve
the long-term preservation of and access to the
information for which they are responsible.
He served as editor in chief of the TEI
Guidelines from 1988 to 2000, and has also served
as co-editor of the World Wide Web Consortium’s
XML 1.0 and XML Schema 1.1
specifications.
Copyright © 2014 by the author. Used with permission.
Abstract
Sometimes we need to focus on the trees, or the leaves on
the trees. Sometimes we need to focus on the forest.
My topic this afternoon is seeing things whole.
The German philosopher Leibniz says several
times that if you really correctly know any
thing, you know all of its relations to other things, you know
all of the predicates which apply to it in much the same way
that knowing a number entails knowing all of its prime
factors.
(This is more than just a vague analogy. In some sketches for
what he called a characteristica
universalis or universal writing
system, Leibniz proposed to use prime numbers to
encode properties and the product of prime numbers to encode
concepts formed by the conjunction of those properties.)
In any case,
according to Leibniz, perfect knowledge of anything entails
knowledge of all of the predicates true of that thing.
He suggested that it might be possible to exploit this fact
by developing a universal writing system in
which signs for complex concepts are built up by conjoining the
signs for the primitive concepts that go together to make up the
complex concept and that with a writing system like that, logical
inference becomes much more straightforward, almost trivial, like
arithmetic calculation — just as checking the divisibility
of one number by another is straightforward, even trivial, if you
have identified the prime factors of the two numbers.
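To make the arithmetic concrete, here is a minimal sketch in Python. The table of primes, the concept names, and the function names are all invented for illustration (Leibniz, of course, left us no such dictionary):

    from math import prod

    # Hypothetical assignment of primes to primitive concepts, in the
    # spirit of the characteristica universalis.
    PRIMITIVE = {"animal": 2, "rational": 3, "mortal": 5}

    def concept(*properties):
        # A complex concept is the product of the primes of its properties.
        return prod(PRIMITIVE[p] for p in properties)

    def entails(subject, predicate):
        # "Every S is P" holds just when P's number divides S's number:
        # logical inference reduced to a divisibility check.
        return subject % predicate == 0

    human = concept("animal", "rational")      # 2 * 3 = 6
    print(entails(human, concept("animal")))   # True: 2 divides 6
    print(entails(human, concept("mortal")))   # False: 5 is not a factor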
This sounds hopelessly naive, over-optimistic, rather
old-fashioned. (But let’s remember we are talking about
Leibniz here. He also had an idea that if you could just get a
plausible notation for concepts of infinitesimal change, you
might be able to solve problems of motion and acceleration which
had thus far — for millennia — eluded solution. And
you know what? On that one, he was right, and we use his notation
to this day in differential and integral calculus, which you might
recall he invented at the same time as and independently of Newton.)
And as for being old-fashioned, well, Ludwig Wittgenstein,
often regarded as one of the preeminent philosophers of the
20th century, argued something very similar in his
Tractatus where he writes:
2.0123 If I know an object I also know all its possible occurrences in states of affairs.
(Every one of these possibilities must be part of the nature of the object.)
A new possibility cannot be discovered later.
2.01231 If I am to know an object, though I need not know
its external properties,
I must know all its internal properties.
2.0124 If all objects are given, then at the same time all
possible states of affairs are also given.
2.013 Each thing is, as it were, in a space of possible
states of affairs. This space I can imagine empty, but I
cannot imagine the thing without the space.
— Ludwig Wittgenstein, Tractatus
Logico-Philosophicus, trans. D. F. Pears and B. F. McGuinness (London: Routledge and Kegan Paul, 1974),
6-7
This line of argument ties in with Wittgenstein’s
argument that the relation of logical consequence cannot in
principle provide us with new information and thus cannot in
principle surprise us. And for similar reasons, he argues, logic has
no need of an identity symbol
since the identity of any thing we
have successfully identified must be trivial and visible on its
face.
His argument is perhaps not unassailable, but it is difficult
to assail because he was a pretty smart guy and he didn’t
leave many angles of attack. But it does seem pretty clear that
for most of us, logical inference does have the capacity to
surprise us and therefore that we live in a rather different
psychological world from the one suggested by Wittgenstein, if we
had any doubt that he lived in a rather different world from the
one most of us inhabit.
On the other hand, it does seem reasonable to believe that
if we are surprised by a logical
inference from some thing or state of affairs, then after
we’ve made that inference, we understand that thing or that
state of affairs better than we did before. We may infer that
perfect knowledge — even though we don’t have it yet —
would involve no longer being able to be surprised, because we
would understand the thing or the state of affairs so thoroughly
that we would grasp all of its logical implications and be beyond
surprise. If we can as a rule be surprised by discoveries,
it is because as a rule our knowledge of things is incomplete
and imperfect.
This is a bit like saying that a perfect understanding of an
elephant involves understanding its rope nature and its snake nature
and its wall nature and so forth, or equivalently, that perfect
vision of a forest involves seeing and knowing not just the forest
but all of the trees and all of the leaves on all of the trees.
That’s an awful lot for us to keep in mind, but that’s
what perfect knowledge would be like.
But since in fact we don’t have perfect knowledge,
we don’t always immediately see all
the implications of a given proposition or set of propositions.
It’s very helpful under these circumstances
to have tools to work through some set of
premises or some given data and calculate some of the implications
we’re interested in. And this is particularly so when
we’re dealing with complex systems made from many, many small
parts intricately interrelated.
An outstanding example that we’ve seen here this week
is the streamability rules of
XSLT 3.0, which are just such a complex system of many, many things
intricately interrelated. The streamability tool described on
Tuesday by John Lumley is useful in part because it does calculate
out the implications of the constructs of the data that we give it
[Lumley]. The rules for writing streamable
functions outlined by Abel Braaksma in his talk on Wednesday also
provide a valuable thread through that particular labyrinth [Braaksma].
XSLT streamability is, of course, not the only complex
system we have seen here this week. The National Information
Exchange Model (NIEM) described by Priscilla Walmsley on Wednesday
[Walmsley] and again this morning by Betty
Harvey [Harvey] is another very complex system
with many moving parts. I think the lessons drawn in
Priscilla’s talk from implementation experience should be
valuable to people working in very different projects, and the
techniques that Betty described for describing extracts from NIEM
and making them accessible and understandable to the people who
have to work with them should also be very widely
applicable.
It’s not just
specifications or vocabularies that can consist of many small parts intricately related. If
it were, we could say: oh, well, the Working Group just didn’t
succeed in boiling it down. Sometimes it’s our own
documents. If we take almost any conventional document and
try to tease out the declarative or, for that matter, the
procedural meaning of its markup by transcribing it in RDF or
First-Order Predicate Calculus (FOPC), we are apt to see a huge
explosion in volume. Sometimes, as in the EARMARK system described
by Francesco Poggi yesterday [Peroni, Poggi, and Vitali], or in
the use of First-Order Predicate Calculus described by Yves
Marcoux and myself [Sperberg-McQueen, Marcoux, and Huitfeldt], this
explosion can allow us the use of new tools for working with the
information, but it does make for an awful lot of distinct objects
to focus on. You will recall that John Lumley observed that in
the streamability analyzer, one line of XSLT not infrequently
turns into about a kilobyte of information in the browser
[Lumley]. And as he said, “You can look at all
this in any XML editor, but then you can’t see the wood for
the trees.”
A desire to see the wood, and not just the trees, motivates
what has historically been a strong interest in tools for
controlling complexity and for hiding complexity from ourselves.
The application of styling is one such tool, illustrated again by
John Lumley in the streamability tool and in his
encore presentation on dynamic documents. Josh Lubell’s
XForms interface for small arcane non-trivial data sets is a way
to make things easier to grasp, easier to grok [Lubell]. Betty Harvey’s documentation of
profiles of complex vocabularies falls into the same
category [Harvey].
Another tool that we use and cultivate is declarative
programming, illustrated here by Anne
Brüggemann-Klein’s work with her students on a computer
game using almost exclusively declarative XML-based technologies
[Sayih, Kuhn, and Brüggemann-Klein], Josh Lubell’s work
again [Lubell], and Hans-Jürgen Rennau’s
work on the declarative description of tools and, from those
declarative descriptions, the generation of consistent user
interfaces and so forth [Rennau].
A third tool that we use to control complexity is
validation. Validation controls complexity in a slightly
different way: it simplifies the population of inputs that
we have to think about. If we know that the input conforms to a
schema, we may know, for example, that every chapter is going to
have a title, and that means our code for handling chapters or
chapter titles does not need to say: “Oh, wait, first of all,
does my input look as I expect it to look?
Given a chapter, do I actually have
a title?”
The stripping out of those now unnecessary
conditionals amounts to a huge reduction in volume and a huge
simplification of the programmer’s task.
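To see what is stripped out, consider a toy sketch in Python (hypothetical code, not drawn from any of this week’s talks): the first function must guard against chapters without titles; the second may assume a schema has already ruled them out.

    import xml.etree.ElementTree as ET

    def chapter_heading_defensive(chapter: ET.Element) -> str:
        # Without validation, every access needs a guard.
        title = chapter.find("title")
        if title is None or not (title.text or "").strip():
            raise ValueError("chapter without a title -- now what?")
        return title.text.strip()

    def chapter_heading_validated(chapter: ET.Element) -> str:
        # If a schema has already guaranteed that every chapter has a
        # non-empty title, the guard disappears; only real work remains.
        return chapter.find("title").text.strip()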
Validation is not by any means a solved problem. Anne
Brüggemann-Klein’s work on identity constraints [Brüggemann-Klein, Maalej, and Sayih] and Eliot Kimber’s continuing quest
for good ways to validate DITA [Kimber and Bina]
provide evidence of that.
We also try to tame complexity by building more powerful
tools, or more convenient tools. The work reported on by Benito
van der Zander is a good example of this: simple patterns that
allow us to do very complicated things in a very simple way [van der Zander]. Or consider Pat Case’s work on
XQuery full-text [Case]; Pat observed that one
of the big advantages of XQuery full-text is: “You do not
have to define search fields; you can search on all your
data.”
I think of that as another way of saying you can
view your data whole; you can see things whole. One compelling
argument for more powerful query tools or extraction tools was
formulated here by David Lee: “The bigger the data, the more
you need less of it.” [Lee] And of
course, broader dissemination of existing query tools falls into
this category too, as described by Cliff Anderson just a few
minutes ago [Anderson].
A fifth approach to controlling complexity lies in achieving
greater uniformity — this is less of a
tool and more of a practice. It is illustrated by the topic
tools described by Hans-Jürgen Rennau [Rennau] and by Steve DeRose’s work
on providing consistency and uniformity of interface between the
XML accessors in a given language and everything else in that
language [DeRose]. It’s a question of
context: Is it more important to have all of the XML accessors
look the same across languages, or is it more important to have
them look like the rest of the language? Steve suggests, I think
quite rightly, that programmers are more apt to care about
consistency within a program and across their experience with a
given programming language than they are about consistency between
programming languages. One of the reasons we have different
programming languages, as Wendell [Piez] suggested in a discussion
a little while ago,
is that they’re good for
slightly different things. We don’t want perfect
consistency across programming languages. That would be
boring.
David Lee’s programmatic desire to generalize and
extend XPath is also seeking a kind of uniformity [Lee]. They say that if all you have is a hammer,
then everything looks like a nail. What I understand David to
be saying is: when all I want to use is XPath, then I want
everything to look like a node. And he’s right.
Everything that we can make look like a node is something we can
work with conveniently in XPath — and if we really want
world domination (I’m not sure we do), those of us who
are interested in world domination should take that to
heart.
Kurt Cagle’s suggestion that devices can describe
themselves and their statuses and their capabilities by using a
uniform RDF interface is another attempt to reduce complexity
by achieving uniformity [Cagle].
There’s another way, if not to
control complexity, then to
reduce its impact. It’s
not exactly a tool; maybe it’s a technical fact; maybe
it’s just a psychological fact. But we can reduce the
apparent importance or difficulty of complexity by zooming out, by
considering things in a larger context.
Taking things in a larger context, taking a wider view,
taking a longer view has a number of advantages. It can help us
to look past superficial differences of opinion and see strengths,
advantages, utility, good work in other people, other
technologies, other groups, even when they refer to things
we’ve been working on for a long time as “pet
projects”. (There was much more than this one point in
Robin Berjon’s talk on Monday, but that was an important
take-home from it [Berjon].)
Taking a broader view can help us see there can be multiple
paths to the same goal. Alex Miłowski showed us on Monday
how some features of HTML5 can help us to achieve some things in
the browser that some people had hoped to achieve using XML
support in the browser, but which never quite materialized out of
the XML support in the browser [Miłowski].
But if we can achieve them now, does it matter whether the
features are called “XML support” or something
else?
Phil Fearon showed us how to work both sides of the street
by processing XML as HTML5, and HTML5 as XML [Fearon]. (I did notice that the list of
technologies he deployed was quite long, and I began to think that
this might be another case of many, many things intricately
inter-related.) But sometimes that’s what you have to do
to make things work.
And we should all take to heart his memorable remark:
“There is no reason to choose one or another.” We can
choose both. That’s nice to be able to do. Alex
Miłowski (there he is again!) and Norm Walsh showed more of
the same direction on Wednesday [Miłowski and Walsh].
And, as I said in the discussion yesterday, it looks as
though those of us who are interested in overlap had better start
learning SPARQL and the other technologies used by the EARMARK
team [Peroni, Poggi, and Vitali], because the work being done in
Bologna is changing the landscape of that discussion and
it’s going to get harder and harder to take any
consideration of overlap seriously that doesn’t engage with
that work.
Another excellent example of taking a broad view and an
example of the kind of consideration of a wide variety of factors
was given by the paper on the Recommended Formats of the Library
of Congress as presented Monday by Ardie Bausenbach and Kate
Zwaard [Bausenbach and Zwaard].
Taking a broad view helps us understand that no single point
of view serves everyone. If you needed another example of that, I
thought the MathML panel on Wednesday showed that very well [Dineen et al.].
And if you take a sufficiently broad view, you end up with
sort of panoramic views of whole industries and vocabulary
developments, and a real focus on the bigger picture, as
illustrated by David Webber’s talk this morning [Webber].
There’s another way to
zoom out, and that’s to take a historical view. Steve
DeRose’s talk about hypertext and the history of hypertext
theory and practice on Monday [DeRose (symp.)], and
Liam Quin’s talk about markup systems on Wednesday [Quin], both illustrate to us the general principle
that seeing things in historical depth can sometimes help us to
get them in focus. For many of the things that we would like to
do, it helps to remember that a lot of people before us may have
tried to do them. And (as Lauren Wood remarked, while putting
up a copy of the old DOM Ranges spec as a poster after listening
to some of the early talks),
even if we don’t see ourselves able
to adopt their solutions wholesale, there may be things we can
learn from them. The past, said an English novelist, is like a
foreign country; they do things differently there.
That means that even if they were trying to do something that we
are also trying to do, they may have had a slightly different
point of view, and we can learn from that difference in point of
view.
If we take a broad view of the possible applications of
our data, we may find ourselves led to the kind of rich markup and
intelligent use of it for a broad variety of applications that
were illustrated this morning by Joe Wicentowski’s talk
about the use of XML in the State Department’s Office of
the Historian [Wicentowski]. Rich views of
data and the reuse of information for different purposes have
always been important in the use of descriptive markup; they are
in large part why we have the concept of descriptive
markup.
One particularly difficult aspect, technically speaking, of
the quest for a really rich and informed view of any complex data
is the requirement, or desire, to capture and expose something
about its development over time, a perennial topic of concern. I
think we saw some progress here this week in the proposal
discussed by Robin La Fontaine [La Fontaine],
and in Ari Nordström’s account of a custom version
management system [Nordström], and also in a
slightly different vein in Josh Lubell’s discussion of
ways to display overlays over complex technical documents [Lubell].
But perhaps the most striking example of a proposal for a
richer account of our information was the proposal made by Domenic
Denicola during the symposium on HTML5 and XML on Monday, with his
discussion of the development of web components [Denicola]. He reminded us in what can be a salutary
way that even when we provide an elaborate vocabulary definition
document, be it a DTD, or an XSD schema, or a RELAX NG schema, or
a big wad of documentation, XML data doesn’t in itself
do anything. “I like things,” he said, “that
do things.”
And it’s clear, as he argued on Monday and as Alex
Miłowski and Norm Walsh showed on Wednesday [Miłowski and Walsh], that web components will make it far
easier to do interesting things with user-defined markup. But I
have to confess that, if we want to see things
whole, and particularly if we want to see things in a historical
light, we may want to take a slightly different view of what is
happening here.
In the 1960s and the 1970s, those involved in the initial
efforts that led to the concept of generic markup took the view
— or so at least we might more or less fairly infer from
the work they did and the documents they produced — that an
external stylesheet language that would allow us to specify the
font and the layout and other typographic characteristics of any
unknown element that we might provide to a processor, or that
might appear in the input to a processor, would constitute an
adequate provision for the semantics of any user-defined markup in
a document. If we could provide that typographic information,
then user-defined elements could have the same
capabilities as those supported by the intrinsic semantics of the
application.
Now in the 1990s when CSS was being developed, that earlier
view began to look a little naive, because there are properties
that we need in order to specify the display of an element in a
browser that had not been foreseen in the work of the GenCode
committee or Working Group 8 of Subcommittee 18 of Joint Technical
Committee 1 of ISO and IEC (ISO/IEC JTC 1/SC 18/WG 8). And also
in an interactive application, the CSS developers reminded
everyone, it’s very helpful to keep the selectors in a
stylesheet rule (what some people speak of as the left-hand side
of the rule) as simple as possible because the more complex the
selectors are, the slower the application of the stylesheet to the
data becomes. And that’s really important in an
interactive application like a web browser. So both the
right-hand side and the left-hand side of the rules needed to look
different from what the earlier work had expected. An adequate
specification of the semantics of an element required a rule or
set of rules in a CSS stylesheet or set of CSS stylesheets, not
just the kind of semantics that had been foreseen earlier.
The developers of CSS, on the other hand, did not foresee
Web 2.0 or later developments. So web components now are being
designed in part around the insight that more is needed to specify
the semantics of an element than a set of CSS properties, and in
part to fill that gap by
providing a way to specify an interface between the element and
the rest of the browser, thus finally enabling the markup designer
to describe the full semantics of non-HTML elements in a way that
puts them on the same footing as elements intrinsically understood
by the browser.
That, at least, is the goal. And this time, surely,
we’ve got it right. Surely we’ll never be around
this track again. There is no danger at all surely that the
primitive semantic notions of web components might strike future
observers as naive, or that they might fail to foresee the future
in the same way that the failure of CSS1 to include hyperlinking
behavior, for example, might strike some observers (both now and
then) as naive.
This is the last time around this track; once we’ve
gotten this round of development done, we’ll be finished
and go on to other things. Right? Maybe.
Now, some of you know that the account I gave a moment ago
of the thinking behind CSS and the thinking behind earlier efforts
toward generic markup was not at all accurate. (And I’d
like to thank you now for not shouting out to interrupt me while I
was telling the story.)
Because the developers of the notion of generic markup, as
far as I know, never actually held any of the views attributed to
them in the story I just told.
It is true, at least according to what I’ve been told
(I wasn’t there) that many of the pioneers of generic
markup initially came together with the goal of defining a single
markup language useful for all
documents, so that publishers could code their documents in this
generic coding, or GenCode, and be done with it. But by all
accounts, they saw very rapidly that no single tagset would
suffice or could possibly suffice for the applications they could
easily foresee, and they knew quite well that there would be
applications that they could not foresee.
So they didn’t spend a lot of time arguing over the
merits of including this or that element and specifying its
typography this way or that way. They didn’t spend any
time at all on a universal tagset because, to borrow a phrase from
Anne Brüggemann-Klein, they realized that “it
didn’t really matter, because the whole concept was doomed
anyway”.
I have gotten the impression from conversations with some
members of the relevant working groups that in fact they used to
remind each other from time to time during meetings that the
markup languages, or meta-languages, they were developing must not
assume that the only use of markup would be to mark up manuscripts
for typesetting, that people must be able to use them for
arbitrarily unusual, even arcane, topics like security controls
(of the kind that Josh Lubell was talking about yesterday [Lubell]), or interactive games (of the kind Anne
Brüggemann-Klein showed us an implementation of [Sayih, Kuhn, and Brüggemann-Klein]), or even — ne plus ultra
of arcane and unusual information — the kind of
literary and linguistic analysis that Wendell Piez showed us in
his visual tour de force the other day [Piez].
When the Text Encoding Initiative came along and actually
did define an SGML vocabulary for literary and linguistic study
among other things, the reaction was unforgettable. They clearly
were elated, even moved, by the thought that they had prophesied
such an application of descriptive markup, and that their
prophecy had come true.
It must be a remarkable feeling to try, more or less blindly,
to ensure that the system you are building is usable for
applications you are not in a position to foresee,
and then to have someone say:
“What you were trying to make possible, you did make
possible.”
Now, it’s true that what most of those early adopters
and promoters of descriptive markup wanted was applications whose
operational semantics were typographical. But
they did not confuse the semantics of markup with the operational
semantics of that markup for one particular application in one
particular environment. At most, a stylesheet rule — or a
Web component interface specification — specifies the
operational semantics for one application of the tagset in one
environment. It is not the creators of SGML who were naive in not
specifying operational semantics, or even in not specifying (at
the meta-level) a language for specifying operational
semantics. It is anyone who believes that any concrete
specification of operational semantics can ever constitute a full
specification of the meaning of markup, especially user-defined
markup.
There is a reason that SGML has given rise to applications
that have lasted for decades and has been applied in areas that
its original developers foresaw only vaguely or not at all, and
TeX (for example) has not. The reason is that TeX came with a
perfectly well-specified semantics. What TeX markup means is
“ink on paper here”;
it’s very good of its kind, but it does not countenance
application to sonnet structure. It does not countenance
application to markup that distinguishes a Leibnizian view of a
monad from other markup that identifies a Russellian
view of the same
phenomena.
Domenic Denicola was not wrong to like things that do
things [Denicola]. We all do. But the
designers of SGML stayed away from operational semantics for a
very good reason: they liked things that do more than one thing.
They liked things that do things today and will continue to do
things tomorrow in a different environment, a decade from now, a
century from now. They took the view that has been expressed here
several times in discussion by Steve DeRose and Eliot Kimber
and others: once you have named
something, then you can do things with it.
Antoine de Saint-Exupéry is often quoted as saying that
the designer has achieved perfection “not when there is
nothing left to add, but when there is nothing left to take
away”. Those who gave us the concepts of descriptive
markup invented many things, but one of the most important virtues
in their design lies not in what they put into it, but what they
kept out of it. Descriptive markup does not do anything, and that
is its strength.
But seeing things whole also means
looking past the surface a bit. I disagree with Domenic
Denicola’s view of what it means to define the semantics of
user-supplied markup [Denicola]. But on Monday
Robin Berjon described a choice which he said had been faced at
some point by the SVG Working Group [Berjon]:
“Which do you want? To be pushing a particular syntax? Or to have
cool graphics on the Web?”
We face perhaps a similar choice: What do we want?
Do we want to be recognized as having had, at some point in
the past, ideas similar to those being pursued by others today?
To get credit for the goals and aspirations of the members of the
SGML on the Web working group?
Or do we want to have web browsers that support user-defined
markup in a system with consistent separation of content from the
specification of processing semantics?
What does it matter, in the long run, whether those who are
specifying web components and building them into the
infrastructure of the Web believe that in doing so they are
fighting with and overcoming the baleful side-effects of XML; or
recognize (as I would see it) that they’re helping carry on
and carry out the program and achieve the goals that motivated the
creators of XML and related technologies ten and fifteen years
ago, and before that, the creators of SGML and its sister
specifications thirty and more years ago, and before that, the
propagandists for GenCode of forty and fifty years ago?
Can we take “yes” for an answer? Even if it
involves looking past the fact that those providing the answer
believe (wrongly, as we may think) that they’re answering
“no”?
One key to understanding, Leibniz told us long ago, lies
in understanding the true nature of things and not letting
ourselves be misled by superficial properties of language or by
the names that we or other people have chosen to use for
things.
Leibniz hoped that it would be possible to reduce complex
notions unambiguously to particular conjunctions of primitive
notions, or basic notions, in just the same way that each number
can be reduced unambiguously to a unique list of prime factors.
Three hundred years later we may not find that proposition so
obvious or even plausible. We may not believe that all interested
parties are likely to agree on how to reduce complex notions to
more basic ones. We may not even be confident that a complex
concept has a unique decomposition into primitive concepts,
analogous to the fundamental theorem
of arithmetic.
A simple example is given by the definition of trees in
graph theory. In treatments of graph theory,
we can find a variety of definitions of “tree”.
There are eight listed in the textbook on graph theory I consult
most often, and it is an exercise for first-semester students of
graph theory to prove they are equivalent, i.e. that they denote
the same set of mathematical objects. But none of them is
reducible to any of the others.
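For concreteness, here are three of the standard, provably equivalent characterizations (textbook commonplaces, not necessarily among the eight the speaker has in mind):

    For a graph $G$ on $n$ vertices, the following are equivalent:
    \begin{enumerate}
      \item $G$ is connected and contains no cycle;
      \item $G$ is connected and has exactly $n - 1$ edges;
      \item any two vertices of $G$ are joined by exactly one path.
    \end{enumerate}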
Leibniz’s dream would have made it possible for us to
learn all about elephants by careful study of the character used
in his philosophical language to denote “elephant”.
We could have learned all about the salient characteristics of
forests by studying the character used to denote
“forest”.
We no longer share Leibniz’s hope.
We will not learn about elephants or forests by studying the
words “elephant” and “forest”. Instead,
we need to get together in physical proximity to other people who
have seen different parts of the elephant, or different parts of
the forest, exchange information, and discuss how to fit all that
information together.
Individually, we see parts of things.
When we come together, we can come closer to seeing things
whole: forests, trees, leaves, and all.
That’s what conferences like Balisage are for.
Thank you for coming to Balisage. See you all next
year.
References
[Anderson] Anderson, Clifford B. On Teaching XQuery to Digital Humanists.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Anderson01.
[Bausenbach and Zwaard] Bausenbach, Ardie, and Kate Zwaard. The Library of Congress Recommended Format Specifications.
Presented at Symposium on HTML5 and XML, Washington, DC, August 4, 2014. In Proceedings of the Symposium on HTML5 and XML. Balisage Series on Markup Technologies, vol. 14 (2014). doi:https://doi.org/10.4242/BalisageVol14.Bausenbach01.
[Berjon] Berjon, Robin. Mending Fences and Saving Babies.
Presented at Symposium on HTML5 and XML, Washington, DC, August 4, 2014. In Proceedings of the Symposium on HTML5 and XML. Balisage Series on Markup Technologies, vol. 14 (2014). doi:https://doi.org/10.4242/BalisageVol14.Berjon01.
[Braaksma] Braaksma, Abel. In pursuit of streamable stylesheet functions in XSLT 3.0.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Braaksma01.
[Brüggemann-Klein, Maalej, and Sayih] Brüggemann-Klein, Anne, Mustapha Maalej and Marouane Sayih. Identity constraints for XML.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Maalej01.
[Cagle] Cagle, Kurt. Semantics and the Internet of Things.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Cagle01.
[Case] Case, Pat. When 57,300,000 Full Text Search Results Are Just Too Many.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Case01.
[Couturat] Couturat, Louis.
La Logique de Leibniz d’après des documents inédits.
Paris: Félix Alcan, 1904.
[Denicola] Denicola, Domenic. Non-Extensible Markup Language.
Presented at Symposium on HTML5 and XML, Washington, DC, August 4, 2014. In Proceedings of the Symposium on HTML5 and XML. Balisage Series on Markup Technologies, vol. 14 (2014). doi:https://doi.org/10.4242/BalisageVol14.Denicola01.
[DeRose] DeRose, Steven J. JSOX: A Justly Simple Objectization for XML: Or: How to do better with Python and
XML.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.DeRose02.
[DeRose (symp.)] DeRose, Steven J. What do we still lack? Or: Prolegomena to any future hypertext system.
Presented at Symposium on HTML5 and XML, Washington, DC, August 4, 2014. In Proceedings of the Symposium on HTML5 and XML. Balisage Series on Markup Technologies, vol. 14 (2014). doi:https://doi.org/10.4242/BalisageVol14.DeRose01.
[Dineen et al.] Dineen, Scott, R. Alexander Miłowski, Kennett Rawson and Lauren Wood. MathML: Technology and practice.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.MathMLPanel01.
[Fearon] Fearon, Phil. Practical Processing of HTML5 as XML and XML as HTML5.
Presented at Symposium on HTML5 and XML, Washington, DC, August 4, 2014. In Proceedings of the Symposium on HTML5 and XML. Balisage Series on Markup Technologies, vol. 14 (2014). doi:https://doi.org/10.4242/BalisageVol14.Fearon01.
[Harvey] Harvey, Betty. Methodology For Providing National Information Exchange Model (NIEM) Model Understanding
to XML and NIEM Novices.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Harvey01.
[Kimber and Bina] Kimber, Eliot, and George Bina. RELAX NG and DITA: An Almost Perfect Match.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Kimber01.
[La Fontaine] La Fontaine, Robin. Standard Change Tracking for XML.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.LaFontaine01.
[Lee] Lee, David. NoXML: Extending the relevance of XPath by breaking the chains of the DOM.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Lee01.
[Leibniz] Leibniz, Gottfried
Wilhelm. Of Universal Synthesis and Analysis;
or, of the Art of Discovery and of
Judgement (c. 1683).
Translated by
G.H.R. Parkinson. Pages 10-17 in Philosophical Writings,
ed. G.H.R. Parkinson, tr. Mary Morris and G.H.R.
Parkinson.
London: J.M. Dent; Rutland, Vermont: Charles E. Tuttle,
1973; rpt. 1995.
[Lubell] Lubell, Joshua. XForms User Interfaces for Small Arcane Nontrivial Datasets.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Lubell01.
[Lumley] Lumley, John. Analysing XSLT Streamability.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Lumley01.
[Miłowski] Miłowski, R. Alexander. XML on the Web.
Presented at Symposium on HTML5 and XML, Washington, DC, August 4, 2014. In Proceedings of the Symposium on HTML5 and XML. Balisage Series on Markup Technologies, vol. 14 (2014). doi:https://doi.org/10.4242/BalisageVol14.Milowski02.
[Miłowski and Walsh] Miłowski, R. Alexander, and Norman Walsh. How to survive the coming namespace winter.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Milowski01.
[Nordström] Nordström, Ari. Multilevel Versioning.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Nordstrom01.
[Peroni, Poggi, and Vitali] Peroni, Silvio, Francesco Poggi and Fabio Vitali. Overlapproaches in documents: a definitive classification (in OWL, 2!).
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Peroni01.
[Piez] Piez, Wendell. Hierarchies within range space: From LMNL to OHCO.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Piez01.
[Quin] Quin, Liam R. E. Markup Formats In Context: A comparison of the strengths of some widely-used markup
systems.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Quin01.
[Rennau] Rennau, Hans-Jürgen. XQuery topic tools - concept, user interface, development framework.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Rennau01.
[Russell] Russell, Bertrand.
A Critical Exposition of the Philosophy of Leibniz
with an Appendix of Leading Passages.
London: Allen & Unwin, 1900; new edition 1937, rpt. 1958.
[Sayih, Kuhn, and Brüggemann-Klein] Sayih, Marouane, Martin Kuhn and Anne Brüggemann-Klein. GameX — Event-Based Programming with XML Technology.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Bruggemann-Klein01.
[Sperberg-McQueen, Marcoux, and Huitfeldt] Sperberg-McQueen, C. M., Yves Marcoux and Claus Huitfeldt. Document lattices: Equivalence, compatibility, and contradiction in document markup.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Sperberg-McQueen01.
[van der Zander] van der Zander, Benito. Extending XQuery with pattern matching over XML, HTML and JSON, and its usage for
data mining.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Zander01.
[Walmsley] Walmsley, Priscilla. NIEM: Implementation experience.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Walmsley01.
[Webber] Webber, David R. R. Meeting the Twin Challenges of Open Data for DATA Act compliance and Delivering next
generation Industry Services: Leveraging XML template and dictionary technologies
with public semantic methods for real-time service interface delivery.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Webber01.
[Wicentowski] Wicentowski, Joseph. Using XML to publish the Foreign Relations of the United States series.
Presented at Balisage: The Markup Conference 2014, Washington, DC, August 5 - 8,
2014. In Proceedings of Balisage: The Markup Conference 2014. Balisage Series on Markup Technologies, vol. 13 (2014). doi:https://doi.org/10.4242/BalisageVol13.Wicentowski01.
[Wittgenstein] Wittgenstein, Ludwig. Tractatus Logico-Philosophicus. Translated by D. F. Pears and B. F. McGuinness. London: Routledge and Kegan Paul,
1974.