How to cite this paper
Sperberg-McQueen, C. M. “Opening up and closing down.” Presented at Balisage: The Markup Conference 2011, Montréal, Canada, August 2 - 5, 2011. In Proceedings of Balisage: The Markup Conference 2011. Balisage Series on Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Sperberg-McQueen02.
Balisage Paper: Opening up and closing down
C. M. Sperberg-McQueen
C. M. Sperberg-McQueen is a consultant specializing in
preserving and providing access to cultural and scientific
data. He has served as co-editor of the XML 1.0
specification, the Guidelines of the
Text Encoding Initiative, and the XML Schema Definition
Language (XSD) 1.1 specification. He holds a doctorate in
comparative literature.
Copyright © 2011 C. M. Sperberg-McQueen
Abstract
Negotiating tradeoffs of flexibility and reliability, freshness and
permanence.
One of the reasons to come to a conference like this is to
learn new things. And as Tommie Usdin reminded us in the opening
session [Usdin], encountering other people
and listening to things they have done, including (or even
especially including) things we don’t expect in advance to
find especially interesting, can open our minds to new possibilities.
And opening new possibilities is, in a way, how many of us presumably
got here.
One reason to get interested in a new technology or a new
programming language or a new markup language or a new kind of
document language is that it opens up possibilities that you
didn’t have before. By looking at SGML and thinking about what
SGML gave us, those of us who got interested in descriptive markup
in the form of SGML learned about ways in which WordPerfect or
WordStar or Waterloo Script or troff had been closing down
possibilities for us — possibilities we hadn’t missed
because we hadn’t thought about them, because the tool shapes the
hand that wields it and the intellectual tool shapes the mind that
uses it. If your document description language is focused, as so
many of them were and are, on putting ink on paper, then you’re
going to think that document description and information management
is about fonts and layout. And it’s only when you get a language
that says, “No, documents are about more than that” (a
language that ironically received much of its original impetus from
typesetters, who know perfectly well that fonts can change in
different renderings of the same document) — it’s only when
you see that that you begin to think about other possibilities.
And that opening up of possibilities can easily give you the
sense that you are seeing the dawn of a new day.
That’s why some of us at least got interested in
descriptive markup as opposed to WYSIWYG. (As Brian Kernighan is
quoted as saying, “The problem with ‘what you see is
what you get’ is that what you see is all you get.”
And when that’s all you’ve ever gotten, you may not miss the
things that it doesn’t give you.) That’s one of the reasons
that other people got interested in XML as opposed to straight
HTML. That’s one reason people get interested in multiple
languages instead of a single language or, generalizing, in
a metalanguage as opposed to an object language.
New technologies can open new possibilities. So new specs
defining new technologies are always interesting. In this context
recall the papers we got from Cornelia Davis about the systematic
application of new specs in the XML space (and some old specs in
the XML space!) to make new things possible [Davis], or Eric Freese’s report on EPUB3 [Freese], or Piotr Bański’s application of
the TEI ODD system to the ISO Linguistic Markup Framework [Banski]. All of these are showing us ways of opening
up new possibilities.
Sometimes new views on existing specs can open up some
surprising perspectives, as we found on Tuesday, when
Hans-Jürgen Rennau reported on the work that he and David Lee
had done on XDML [Rennau]. And sometimes we get
new perspectives, new possibilities, from the collision of
existing specs: think of Kurt Cagle talking about the collision of
XQuery and SparQL [Cagle]. In the other track
about the same time, Julien Seinturier had some very interesting
remarks about the different way that domain specialists — in
his case, linguists — react to XQuery and SparQL [Seinturier] that made me think about ways it might
make sense to change what I teach when I teach XQuery.
Sometimes the collision with other specs leads us to seek
better ways of co-existence. You know, sometimes you’d really
like to conquer the world and get the other spec out of the way.
(SGML did manage to displace ODA; that was a long hard struggle,
but I haven’t heard anybody but SGML people mention ODA for a
long, long time.) But in the short term, XML is going to co-exist
with JSON for quite a while, so I’m glad to see David Lee
looking for a way to get back-and-forth without information loss
and with at least reasonably plausible translation results so the
product of the transformation doesn’t always sound like the
output of a badly trained machine translation program [Lee]. Even within the XML community, XQuery and
XSLT are going to co-exist for a while, so Evan Lenz’s ideas
for hybridizing them may give us new perspectives for the future
[Lenz].
People talk about turning over a new leaf. It took me a long
time to understand that idiom. Eventually, I concluded that in
its original sense it refers to a school boy who has spoiled one
page by spilling too much ink turning over a new page, and now
it’s a fresh new opening. There’s nothing wrong with this
new leaf; it’s full of possibility. The French poet,
Stéphane Mallarmé, talks about this. He talks about
the empty page whose blankness defends it. Why does the blankness defend
the page?
Because the page is currently full of possibilities, and as soon
as I write a word on it, I’ve spoiled an infinite number of
them. This thought can lead to paralysis, of course.
Sometimes those of us who are predicting the dawn of a new day
will also assume or predict that everyone is going to see the
beauty of these possibilities, rejoice in them, and adopt this new
technology. And sometimes the rosy possibilities that we foresee
will accrue only if everybody does adopt the new technology. Some
of you will be thinking, along about now, about all of the
rhetoric that we regularly hear — I associate it especially
with the web application development community — about the
advantages of network effects and the advantages of ubiquity. I
think the biologist Richard Dawkins has spawned
(particularly in the IT community for some reason) a whole
generation of intellectual social Darwinists who evaluate ideas
exclusively in terms of how widely they spread, and not how
coherent or true they might be. This is a sort of
propagandists’ view of ideas, whose coarseness and
naïveté would shame even the most outrageous of the
19th century thinkers who invented that misapplication of
biological theory to human ethics which is social
Darwinism.
So it’s nice to have some cases where advantages will
accrue and new possibilities will open up regardless of whether
anybody else sees them or adopts the new technology or not, where
the advantages accrue for individuals and not just huge
collectives. It's nice because it frees us from the need to make
technological choices based solely on our predictions about what
other people are going to think is good technology; it allows us
to evaluate the technology on its merits. As technical people,
most of us are likely to be better at evaluating the technical
merits of technologies than at predicting the crowd psychology of
marketplaces. As Michael Kay said several times during this
conference, “I really hate to try to make predictions about
the future. It’s very hard.” That’s true for all of
us.
I had occasion recently to
reread the paper published by James Coombs, Allen Renear, and
Steve DeRose in the November 1987 Communications of the ACM, and I was struck that they say:
We do not advocate waiting for SGML to become dominant. As we
have illustrated, [Aside: Only illustrated, not
proven? Interesting, Dr. Renear.] descriptive
markup is vastly superior to both presentational and procedural
markup. The superiority of descriptive markup is not dependent
on its becoming a standard; instead, descriptive markup is the
basis of the standard because of its inherent superiority over
other forms of markup.
Those of us who have converted to descriptive markup are
already enjoying some of these outlined benefits. ... [Coombs]
I think that’s true, and it’s useful to focus on the
benefits you can get by adopting a technology even if the browser
makers or the large community of web application developers never
see it and never adopt it. In that respect I’m particularly
grateful to Eric van der Vlist for showing us how we can bring
hypertext on the web forward into the 1960s
with multi-ended links
that run in browsers today without requiring that we persuade the
guys at Netscape or Opera or those other places [Vlist01].
On the other hand, as that quotation from Mallarmé
illustrates, turning over a new leaf may open so many new
possibilities that the experience becomes a little paralyzing.
Working with metalanguages can make things so abstract it’s
hard to find your way. And working with standards or
specifications that leave a lot of freedom to their implementors and
their users can leave so many possibilities open that you wonder
whether you gained anything at all by adopting the standard or
using the specification. When you’re using PDFs, at least
there is a Big Daddy over there in San Jose that tells the world
what PDFs mean, and it doesn’t matter how gnarly they get
inside, because the only people who see the gnarliness inside a
PDF file are the people who write PDF display software, and most
of them work for Adobe and are very well paid, and the others will
either do the same thing the Adobe browser does or they will be
punished, and we don’t need to worry about it.
XML, SGML, descriptive markup in general, takes away that
comforting dependence on somebody else to decide what’s
important and what counts. It puts the responsibility on you, and
that can be frightening. Even when we do not find it frightening,
it can be an onerous responsibility. As a well-known critic of
SGML (Darrell Raymond) once said, “Descriptive markup frees
authors from the tyranny of typography only to plunge them
headlong into the hellfire of ontology.”
You see traces of that difficulty in the problem that Norm
Walsh talked about: the difficulty of finding an editor that end
users will like that preserves the freedom that XML gives you
[Walsh]. It’s a lot easier to write simple
interfaces if the user doesn’t have that many possibilities.
Sometimes freedom is bewildering and threatening. The report by
Ravit David and others, on their experience loading XML ebooks,
illustrates those difficulties [David]. Jeff
Beck’s report on the self-delusions you may fall into when
you’re working in a closed system [Beck] is
also relevant here. One of the sentences that people will
remember from this conference is the observation “If XML is
like a conversation, running a closed XML system is like listening
to the voices in your head.”
And, of course, when anything at all is possible, the person
you’re listening to may be lying. I bet most of you didn’t
realize that Lynne Price’s game was part of the technical
program. I didn’t either until I realized: it’s another
instantiation of this problem. Infinite possibilities include infinite methods of
deceit and threat. And even if your interlocutors are not trying
to deceive you, they may just be plain wrong, as Ken Sall mentioned
this morning [Sall]. Did Jeff Beck really play
on that record, and if so, was it this Jeff Beck or was it a
different Jeff Beck? Maybe, all along, Jeff was a
great guitar player in his spare time — I only saw one side
of him, as a markup geek at the National Library of Medicine,
but in his secret life he was a rock star.
[Laughter.]
Sometimes, possibly for this reason — it seems related
— the way forward seems to be not to open up more
possibilities but to start closing possibilities down. That’s
why many people thought then and think now that Scribe was a step
forward vis-à-vis troff. Why? Because it cut off a whole
lot of spaces, particularly spaces of bad typography and bad
document structure, and put a very limited palette of
possibilities in front of the user. It’s one of the things
that the designers of XML tried to accomplish with XML: to close
off some of the possibilities of SGML. Let’s not fool
ourselves; there is a loss of flexibility in XML vis-à-vis
SGML. The design goal was to make that loss of flexibility
involve the things you don’t care about and not the things you
do care about. So XML doesn’t reduce possibilities for the
document owners or the vocabulary designers, only for the software
developers. There’s a trade-off: You have to think
“Which is more valuable to you? Having more conforming
parsers than you can count on the fingers of a single hand (and
more non-conforming parsers than you can count on fingers and
toes, and possibly some fingers and toes of some friends)? Or, on
the other hand, the ability to use backslash rather than ampersand
as a general entity reference opener?” On the whole, I’m
happy to give up on a backslash as a general entity reference open
delimiter in order to have more parsers. So I’m happy with
that trade-off. And those of us who prefer to use SGML can still
use SGML because it’s there; it’s a spec, and its
implementations are not going away, although, as far as I know,
most of them have not been updated recently.
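The delimiter flexibility being traded away here was quite concrete in SGML: the SGML declaration let a document announce a variant concrete syntax, remapping individual delimiter roles away from the reference syntax. A schematic excerpt (this is not a complete, valid SGML declaration; only the DELIM clause is sketched) might remap the entity reference open delimiter as described above:

```
<!SGML "ISO 8879:1986"
  ...
  SYNTAX
    ...
    DELIM GENERAL SGMLREF
          ERO "\"    -- entity references now open with backslash --
    ...
>
```

With such a declaration in force, one would write \copy; rather than &copy; — and every conforming parser would have to honor the remapping, which is exactly the implementation burden XML removed by fixing the delimiters once and for all.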
That same simplification through reducing choices is probably
responsible for the fact that many, many more people use TEI-Lite
than full TEI. You can’t use full TEI without making an awful
lot of decisions. Quite often they’re decisions that you
don’t feel in a position to make upfront because until
you’ve had some experience you don’t know whether you want
that module or not. Blind interchange similarly requires closing
off possibilities and reducing variation. (We’ll come back to
that.) To write any text at all on the page — even a text
as beautiful as the poem Sea
Breeze — Mallarmé [Mallarme] had to take a stand; he had to put something
down on the page. He had to shut out an infinite number of other
possible texts in order to get the text he did write, and he had
to risk an ink blot or two. Similarly, growing up requires that
you stop changing your mind about what you want to be when you
grow up and start focusing; at least, that’s what I’ve been
told by people who have grown up.
Even at relatively low levels, regularity, predictability, and
explicit structure can open up new possibilities. The simple
regularity of markup — the ability to distinguish markup
from non-markup — allows Daniel Jettka and Maik
Stührenberg to produce tools to do visualization of documents
[Jettka]. You know, visualizing the structure of TeX
documents or troff documents would be an NP-complete problem. It
would require artificial intelligence because you would
essentially have to write a troff processor and then you’d have
to do artificial intelligence to analyze the shape of the page to
decide what the structure of the document was. Having the
structure explicitly marked allows Jettka and Stührenberg to
draw trees without occupying the mainframe computer center for
three weeks in order to document the structure of the first
document they try, and then another three weeks to get the second
one to compare with it. Consider the paper of Jean-Yves
Vion-Dury, in which he exploits the distinction between markup and
content in order to encrypt them differently, so that you can
semi-trust your service vendors: they can
perform operations without having to understand either the
operations or your documents fully [Vion-Dury]. You couldn’t do that if you
didn’t have a reliable guide to the structure that you’re
exposing to them (or, in some cases, potentially hiding from them)
independent of the content. Or recall the work that Jacques
Durand did in XTemp [Durand]; I think that’s
aided by
the generality at one level and the utter predictability at
another level of the message structure of the messages they have
to deal with.
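The markup/content split that encryption scheme exploits can be illustrated with a toy sketch. This is not the actual scheme from the paper: the cipher here is a deliberately insecure XOR toy, and the function and key names are invented for illustration. The point is only the separation: element names are encrypted under one key and character content under another, so a vendor holding only the structure key can manipulate the tree without reading the text.

```python
import hashlib
import xml.etree.ElementTree as ET

def toy_encrypt(text: str, key: str) -> str:
    """XOR text with a key-derived byte stream. Toy cipher: NOT secure."""
    stream = hashlib.sha256(key.encode()).digest()
    data = text.encode()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data)).hex()

def toy_decrypt(hexed: str, key: str) -> str:
    """Invert toy_encrypt (XOR is its own inverse)."""
    stream = hashlib.sha256(key.encode()).digest()
    data = bytes.fromhex(hexed)
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data)).decode()

def encrypt_tree(elem: ET.Element, structure_key: str, content_key: str) -> None:
    """Encrypt tag names and text under different keys, in place.
    The 'n' prefix keeps encrypted tags valid as XML names."""
    elem.tag = "n" + toy_encrypt(elem.tag, structure_key)
    if elem.text and elem.text.strip():
        elem.text = toy_encrypt(elem.text, content_key)
    for child in elem:
        encrypt_tree(child, structure_key, content_key)

doc = ET.fromstring("<order><item>widget</item></order>")
encrypt_tree(doc, structure_key="vendor-key", content_key="owner-key")
# The tree shape survives intact; a vendor holding only vendor-key can
# recover the element names, but cannot read "widget" without owner-key.
print(ET.tostring(doc, encoding="unicode"))
```

Because the tree shape is untouched, structural operations (counting elements, reordering siblings, validating depth) still work on the encrypted document — which is the reliable-guide-to-structure property the paragraph above describes.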
Now, the difficulties of uncertainty and freedom, and the
advantages of restriction — I’m tempted to say
“voluntary slavery” — apply not just at the
psychological level and not just at the application level (even at
a very low level). Why are Michael Kay and O’Neil Delpratt
able to optimize certain things and not others [Delpratt]? They are able to optimize those things
which they can decide at compile time, and they cannot optimize
things which cannot be determined until run time. If they know,
at compile time, that certain things are not going to happen at
run time, then they can compile into byte code, and the stylesheet
can run faster. Freedom leads to uncertainty, and uncertainty
leads to doubt, and doubt leads to ... slower processing ... at
many, many levels.
Eliminating the possibilities that you don’t want to
exercise can help in a lot of ways. To interoperate with other
software, to make XML data more tractable for people who don’t
have an XML mindset, requires clean APIs — APIs like
the one Liam Quin sketched, which try to make XML easier to write
for people who don’t think in XML [Quin]. We
may wish that they would just start using native XML programming
languages like XQuery and XSLT, but historically they have
perversely enough exercised their freedom not to do so. And rather
than putting them in chains and forcing them, it’s probably a
good idea to try to make things easy. Similarly, I’m happy to
see the work on XML serialization of C# and Java objects that
Carlos Jaimez-Gonzalez described [Jaimez-Gonzalez], or the packaging work that Chris
Maloney described this morning [Maloney]. To
make progress on certain tasks, including the development of
applications, can just require that you put your head down and you
make decisions and you put a stake in the ground and you risk an
ink blot or two.
Recall the work of Julien Seinturier on XML engines for
multimodal linguistic annotations [Seinturier],
or similarly, Alexander “Sasha” Schwarzman’s
description of the work he and others have done dealing with all
the complications of supplemental material for journal articles
[Schwarzman]. Those case studies reflect one of
the kinds of work we need to do in order to make XML realize some
of the possibilities for our culture that we want it to realize,
even though, of course, one of the first things you discover when
you do that kind of work is that you’ve uncovered a whole lot
of new problems you didn’t know you had before, including
the perennial problem “Oh, this is really interesting, but
who is actually going to pay for this additional work for the
preservation of this additional material?” We now know
it’s essential, but we haven’t necessarily found new
income.
I have seen the future, and it will require better
documentation.
That’s part of the cussedness of human beings. Human
beings can do things in a whole lot of different ways. We can
understand each other because out of the infinite number of
possible human languages, there is only a finite number of human
languages actually in use at any given time and an even smaller
finite number that are typically used in a particular location. So
if we address each other in 21st-century English or 21st-century
French, here in Montréal, we’re likely to succeed fairly
quickly in finding a language that our interlocutor understands.
Fortunately, we can restrict our attention to 21st-century English
and 21st-century French — we don’t have to experiment
with Chaucerian English or Old High German or Gothic or Uyghur or
historical versions of that very large number of attested
languages. But if we want to preserve our information for the
long-term, the recipients of that information in the long-term
won’t have all of the context that we have that reduces the
number of possibilities we have to try. So as David Dubin reminded
us the other day [Dubin], we are going to have to document things a
lot better than we might think. It is precisely those things
which are so obvious that we don’t think of them as needing
documentation, the things that would be almost an insult to say
explicitly (because it would make your interlocutor wonder whether
you doubt their mental capacity) that may be most important to
document. So I’m glad that people like David Dubin or people
like Daan Broeder and Andreas Witt and Oliver Schonefeld and their
colleagues are working on long-term preservation [Dubin,
Broeder] and how to make it work, and trying to
figure out how to elicit the documentation that we need and how to
store it in ways that people in the future will have a fighting
chance at understanding.
Sometimes we may resist nailing things down because there’s
the risk of getting it wrong; there is the risk of foreclosing
possibilities that were really what we wanted. And I’ll tell
you a story I’m embarrassed about: Sometime late in the
development of the XML 1.0 spec, Dan Connolly who was the W3C
staff contact for the Working Group, said to me, “What is an
XML document? Of what set of objects is the set of XML documents
a subset?” And I said, rather guardedly, “Why do you
want to know?” I had worked enough with Dan that I was
worried about a trap. And he said, “It’s just a sort of
intellectual hygiene; it’s part of the definition of any set.
Modern set theory says if you want to make sure that it’s safe
to apply the axioms and theorems of set theory, you have to define
sets in certain ways to avoid well-known intellectual
problems.” And I said, “You know, I really don’t
think that we’re going to run into Russell’s Paradox if we
don’t specify whether an XML document is a string of characters
or an abstract structure of some kind, so no, I don’t want to
go there.” I have no idea whether anybody else in the Working
Group thought about this problem; Dan and I, I think, were talking
offline, and I resisted because I did not want to nail down the
nature of an XML document in a specific way because I foresaw that
other people would use that to sort of close down other
possibilities. And the result has, of course, cost later working
groups some indeterminate number of months or years
trying to patch problems in the formal underpinnings of their
specifications. Some XML specifications, for example, appeal to
the notion of identity of XML elements — these two things,
they may say, are the same if they are, or are derived from, the
same XML element. Sometimes, working group members are surprised
to discover that their specification has a hole in its foundation
as a result, since the XML spec doesn’t actually define
identity criteria for XML documents or XML elements. If identity
is not defined for XML elements, you cannot appeal to identity of
XML elements to determine the identity or non-identity of other
things of interest to your specification. And it’s my fault.
It would be better to have nailed it down. It would be better
to have identified the possibilities I was trying to keep open and
define each of them, so that we had terms with which we could talk
about them.
Trying to avoid nailing things down may be understandable when
you’re trying to preserve possibilities, but trying to avoid
clarity is not the right way. So I’m grateful to Allen Renear
and his colleagues, even though I disagree with them on the nature
of the identifier I42. I’m grateful to
them for asking the question “What is the logical form of a
metadata record?” [Renear]. It’s the right kind of question; it’s an
essential kind of question that we need to ask. I’m grateful to
Walter Perry for asking a related question [Perry]. It’s true that you may risk, when you
talk about these things, having someone stand up and say,
“I’m lost.” But we have to ask these questions; it will take us a
while to find answers that we can successfully communicate to each
other, but if we don’t ask the questions, we’re never going
to get there. And I’m grateful to Lars Johnsen and Claus
Huitfeldt for the same reason; I don’t understand what those
lattices they are talking about mean, but I now know there’s
something I have to work on understanding [Johnsen].
But now we have a contradiction. (Some of you will have
noticed this some time ago, but it took me a while to get around
to it.) Now we have a contradiction because on the one hand we
want openness and we want freedom and we want flexibility, but we
notice that that sometimes leads to paralysis. And on the other
hand, we want to avoid closed-down options and foreclosure of
possibilities and inflexibility and rigidity, but if we also want
blind interchange and interoperability, that seems to be the way
to get there. How do we balance these competing interests? Do we
have to choose one and let the other one go to the dogs? Can we
trade them off somehow? Do we have to choose whether to eat the
cake today or save it for tomorrow? Or can we have them both?
And what I think of as the big theme of this conference is
precisely the question: Can we have them both? In his paper,
Eliot Kimber explained the way DITA has striven to make the notion
of architectural forms concrete and executable and managed to make
DITA comprehensible in ways that I had never managed to find it
before [Kimber]. The DITA approach, as he
described it, seems to provide a way to have a certain kind of
freedom within constrained limits and provide precisely the
constraints that you need in order to allow at least a certain
level of quality in default processing — to have at least
some of the advantages of both poles. And if things work right
and if you get the right set of primitive types, maybe the
advantages you have over here that are preserved and the
advantages you have over there that are preserved and not lost are
the ones you care about, and the advantages that you’ve lost
are the ones you didn’t care about, like changing your
delimiters.
Syd Bauman’s talk touched on that very topic of interchange
and interoperability [Bauman]. Wendell Piez
discussed ways to provide controlled extension points, not just in
the schema and not just outside the schema: having them both
seemed to suggest a way that we may be able to manage those
difficult trade-offs [Piez]. I still don’t fully understand this, but the
DITA mechanism that Eliot described and the local extensibility
mechanisms that Wendell described feel to me similar at some deep
level to the definition of forward processing in processor
specs.
Another anecdote: The XML Schema Working Group was aware that
versioning was a terribly difficult problem, and we were
desperately afraid of getting it wrong, so in XSD 1.0 we in fact
said nothing. The XSLT Working Group — the group that
developed XSLT 1.0 — was also aware that it was important,
and they were also afraid of getting it wrong, but they took a
risk and defined a forward processing mode for XSLT 1.0
processors. And as Michael (Kay), or anybody who has actually
worked with XSLT 1.0 processors in the presence of XSLT 2.0
stylesheets, will tell you, they didn’t get it completely
right, and the parts they did get right weren’t always
correctly implemented, but ask yourself: “Which set of users
is currently in a better situation in the presence of new
constructs? The users of XSD 1.0 or the users of XSLT 1.0?”
I am in both sets, and I tell you I am a lot happier as an XSLT
programmer than I am as a schema writer, because the one way to be
absolutely sure of getting it entirely wrong is to say nothing in
a misguided attempt to leave all of your options open.
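The mechanism at issue can be sketched concretely (a schematic fragment, not a complete working stylesheet; the element and attribute names are real XSLT). Declaring version="2.0" puts an XSLT 1.0 processor into forwards-compatible mode: it does not signal a static error for instructions it does not recognize, and when such an instruction is actually instantiated it executes the instruction’s xsl:fallback child instead, while a 2.0 processor recognizes the instruction and ignores the xsl:fallback:

```xml
<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <!-- xsl:result-document is new in XSLT 2.0. A 1.0 processor in
         forwards-compatible mode does not reject it at compile time;
         if the instruction is instantiated, the xsl:fallback child
         runs instead. A 2.0 processor ignores the xsl:fallback. -->
    <xsl:result-document href="report.html">
      <xsl:fallback>
        <!-- 1.0 path: write to the main result tree instead -->
        <xsl:apply-templates/>
      </xsl:fallback>
      <xsl:apply-templates/>
    </xsl:result-document>
  </xsl:template>
</xsl:stylesheet>
```

That hook is what XSD 1.0, by saying nothing about versioning, failed to provide: there is no analogous place in a 1.0 schema to hang a fallback when a 1.1 construct appears.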
There comes a time for opening things up, and there is a time
for closing things down. One part of growing up is realizing that
some problems don’t have a permanent solution, so the times
for opening things up and the times for closing things down are
likely to alternate, and you’ll have to decide what kind of
time this is. We’re not going to find any permanent
solution to the problem of deciding which kinds of freedom we have
to preserve and which kinds of voluntary slavery are worth entering
into. Those are questions that require human judgment. One of the
things we can do as technology people is to help make tools that
will support that human judgment and allow humans to make human and
humane judgments and not be consumed by clerical work. To make
those tools we are, from time to time, going to have to close down
some possibilities, to keep things as simple as they can be (but
remember security), to take a stand, to put a stake in the ground,
to risk an ink blot, to work not just at a metalanguage level or the
meta-metalanguage level or the meta-meta-metalanguage level, but
— gasp! — at the object level, to write real documents
and real vocabularies and to say relatively concrete things about
relatively concrete entities.
Right now it’s time to close down Balisage 2011 so you
can all go home and do that work. Soon enough it will be time for
the pendulum to swing in the other direction, when what you will
want to do is to think about the new things that you learned here
that you haven’t thought of before, to look at your problems
from a new angle, to open up your mind again to new possibilities.
I can think of a really good place to open up your mind to new
possibilities, to do that kind of thing. So “So long!”
and I look forward to seeing you in Montréal for Balisage
2012.
References
[Banski] Bański, Piotr. “Literate
serialization of linguistic metamodels.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Banski01.
[Bauman] Bauman, Syd. “Interchange
vs. Interoperability.” Presented at Balisage: The
Markup Conference 2011, Montréal, Canada, August 2 - 5,
2011. In Proceedings of Balisage: The Markup
Conference 2011. Balisage Series on Markup Technologies,
vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Bauman01.
[Beck] Beck, Jeff. “The
False Security of Closed XML Systems.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Beck01.
[Broeder] Broeder, Daan, Oliver Schonefeld,
Thorsten Trippel, Dieter Van Uytvanck and Andreas
Witt. “A
pragmatic approach to XML interoperability — the Component
Metadata Infrastructure (CMDI).” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Broeder01.
[Cagle] Cagle, Kurt. “When
XQuery and SparQL Collide.” Presented at Balisage: The
Markup Conference 2011, Montréal, Canada, August 2 - 5,
2011. In Proceedings of Balisage: The Markup
Conference 2011. Balisage Series on Markup Technologies,
vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Cagle01.
[Coombs] Coombs, James H., Allen H. Renear and
Steven J. DeRose. “Markup systems and the future of
scholarly text processing.” Communications of the ACM 1987 Nov;
30(11):933-947. https://doi.org/10.1145/32206.32209.
[David] David, Ravit H., Shahin Ezzat Sahebi,
Bartek Kawula and Dileshni Jayasinghe. “Challenges
and Potential of Local Loading of XML Ebooks.”
Presented at Balisage: The Markup Conference 2011, Montréal,
Canada, August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.David01.
[Davis] Davis, Cornelia. “Programming
Application Logic for RESTful Services Using XML
Technologies.” Presented at Balisage: The Markup
Conference 2011, Montréal, Canada, August 2 - 5, 2011. In
Proceedings of Balisage: The Markup Conference
2011. Balisage Series on Markup Technologies, vol. 7
(2011). https://doi.org/10.4242/BalisageVol7.Davis01.
[Delpratt] Delpratt, O’Neil Davion, and
Michael Kay. “The
Effects of Bytecode Generation in XSLT and XQuery.”
Presented at Balisage: The Markup Conference 2011, Montréal,
Canada, August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Delpratt01.
[Dubin] Dubin, David, Karen Wickett and Simone
Sacchi. “Content,
Format, and Interpretation.” Presented at Balisage:
The Markup Conference 2011, Montréal, Canada, August 2 - 5,
2011. In Proceedings of Balisage: The Markup
Conference 2011. Balisage Series on Markup Technologies,
vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Dubin01.
[Durand] Durand, Jacques, Hyunbo Cho, Dale
Moberg and Jungyub Woo. “XTemp:
Event-driven Testing and Monitoring of Business processes:
Leveraging XML, XPath and XSLT for a Practical Event
Processing.” Presented at Balisage: The Markup
Conference 2011, Montréal, Canada, August 2 - 5, 2011. In
Proceedings of Balisage: The Markup Conference
2011. Balisage Series on Markup Technologies, vol. 7
(2011). https://doi.org/10.4242/BalisageVol7.Durand01.
[Freese] Freese, Eric. “Report
from the Front: EPUB3.” Presented at Balisage: The
Markup Conference 2011, Montréal, Canada, August 2 - 5,
2011. In Proceedings of Balisage: The Markup
Conference 2011. Balisage Series on Markup Technologies,
vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Freese01.
[Jaimez-Gonzalez] Jaimez-Gonzalez, Carlos R.,
Simon M. Lucas and Erick J. Lopez-Ornelas. “Easy
XML Serialization of C# and Java Objects.” Presented
at Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Jaimez01.
[Jettka] Jettka, Daniel, and Maik
Stührenberg. “Visualization
of concurrent markup: From trees to graphs, from 2D to
3D.” Presented at Balisage: The Markup Conference
2011, Montréal, Canada, August 2 - 5, 2011. In Proceedings of Balisage: The Markup Conference
2011. Balisage Series on Markup Technologies, vol. 7
(2011). https://doi.org/10.4242/BalisageVol7.Jettka01.
[Johnsen] Johnsen, Lars G., and Claus
Huitfeldt. “TagAl:
A tag algebra for document markup.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Johnsen01.
[Kimber] Kimber, Eliot. “DITA
Document Types: Enabling Blind Interchange Through Modular
Vocabularies and Controlled Extension.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Kimber01.
[Lee] Lee, David. “JXON:
an Architecture for Schema and Annotation Driven JSON/XML
Bidirectional Transformations.” Presented at Balisage:
The Markup Conference 2011, Montréal, Canada, August 2 - 5,
2011. In Proceedings of Balisage: The Markup
Conference 2011. Balisage Series on Markup Technologies,
vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Lee01.
[Lenz] Lenz, Evan. “Carrot:
An appetizing hybrid of XQuery and XSLT.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Lenz01.
[Maloney] Maloney, Chris. “JATSPack
and JATSPAN, a packaging format and infrastructure for the NLM/NISO
Journal Archiving Tag Suite (JATS).” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Maloney01.
[Mallarme] Mallarmé, Stéphane.
“Brise marine”.
In
Anthologie de la poésie
française,
Nouvelle Édition suivie d'un post-scriptum,
ed. Georges Pompidou
(Paris: Hachette, 1961), p. 403.
[Perry] Perry, Walter E. “REST
for document resource nodes: IPSA RE for the arcs.”
Presented at Balisage: The Markup Conference 2011, Montréal,
Canada, August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Perry01.
[Piez] Piez, Wendell. “Abstract
generic microformats for coverage, comprehensiveness, and
adaptability.” Presented at Balisage: The Markup
Conference 2011, Montréal, Canada, August 2 - 5, 2011. In
Proceedings of Balisage: The Markup Conference
2011. Balisage Series on Markup Technologies, vol. 7
(2011). https://doi.org/10.4242/BalisageVol7.Piez01.
[Quin] Quin, Liam. “XML
out — reducing clutter.” Presented at Balisage:
The Markup Conference 2011, Montréal, Canada, August 2 - 5,
2011. In Proceedings of Balisage: The Markup
Conference 2011. Balisage Series on Markup Technologies,
vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Quin01.
[Raymond] Raymond, Darrell, Frank Tompa, and
Derick Wood.
“From Data Representation to Data Model: Meta-Semantic
Issues in the Evolution of SGML.”
Computer Standards & Interfaces
18 (1996): 25-36. https://doi.org/10.1016/0920-5489(96)00033-5.
[Sall] Reck, Ronald P., Kenneth B. Sall and
Wendy A. Swanbeck. “Determining
the Impact of Eric Clapton on Music Using RDF Graphs: Selected
Challenges of Semantics Across and Within Datasets.”
Presented at Balisage: The Markup Conference 2011, Montréal,
Canada, August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Sall01.
[Renear] Renear, Allen H., Richard J. Urban and
Karen M. Wickett. “Meditations
on the logical form of a metadata record.” Presented
at Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Renear01.
[Rennau] Rennau, Hans-Jürgen, and David
Lee. “XDML
- an extensible markup language and processor for
XDM.” Presented at Balisage: The Markup Conference
2011, Montréal, Canada, August 2 - 5, 2011. In Proceedings of Balisage: The Markup Conference
2011. Balisage Series on Markup Technologies, vol. 7
(2011). https://doi.org/10.4242/BalisageVol7.Rennau01.
[Schwarzman] Schwarzman,
Alexander. “Supplemental
materials to a journal article.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Schwarzman01.
[Seinturier] Seinturier, Julien, Elisabeth
Murisasco and Emmanuel Bruno. “An
XML engine to model and query multimodal concurrent linguistic
annotations: Application to the OTIM Project.”
Presented at Balisage: The Markup Conference 2011, Montréal,
Canada, August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Seinturier01.
[Usdin] Usdin, B. Tommie. “Serendipity.”
Presented at Balisage: The Markup Conference 2011, Montréal,
Canada, August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Usdin01.
[Vlist01] van der Vlist, Eric. “One
Href is not Enough: We need n hrefs.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Vlist01.
[Vion-Dury] Vion-Dury, Jean-Yves. “Secured
Management of Online XML Document Services through Structure
Preserving Asymmetric Encryption.” Presented at
Balisage: The Markup Conference 2011, Montréal, Canada,
August 2 - 5, 2011. In Proceedings of
Balisage: The Markup Conference 2011. Balisage Series on
Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Vion-Dury01.
[Walsh] Walsh, Norman. “What
will it take to get (end user) XML editors that people will
use.” Presented at Balisage: The Markup Conference
2011, Montréal, Canada, August 2 - 5, 2011. In Proceedings of Balisage: The Markup Conference
2011. Balisage Series on Markup Technologies, vol. 7
(2011). https://doi.org/10.4242/BalisageVol7.Walsh01.