The Year in eXtyles

Presenter: Liz Blake, Inera | An Atypon Company

Liz: Thank you, Jo.

Yes, as usual, I’m going to begin the meeting by telling you all what we’ve been up to for the past year. As Jo mentioned, this is the third virtual XUG and the 18th annual XUG overall. For those of you who are new and have only come to virtual meetings, we used to hold this meeting in person for many years, and we had originally planned to do that this year as well, but for a couple of different reasons we decided to keep it virtual.

And we did reach out to many of you past XUG attendees to see whether people preferred an in-person or a virtual event, and the responses were mixed. We do understand that people miss getting together in person; we do too. It is hard for me, if I say something mildly amusing during this talk, not to hear or see laughter.

But there are also some pros (Debbie is clapping) to doing the event virtually. Of course, we get more attendees when we do it virtually, my cat can appear out of nowhere, and we’re able to provide recordings very easily for the entire meeting.

So, pros and cons, and we appreciate you still showing up for the virtual event. Will events be virtual or in person in the future? We’re not quite sure what we’re going to do next year; we will keep you posted. It’s likely that we’ll start doing a combination in the future, but we’ll certainly let you know and continue to solicit your feedback.

The image on this slide is from the beach near my home in Maine. I know some people are already chiming in in the chat with where they’re joining from, but if you haven’t, please do feel free to let people know in the chat where you are attending the meeting from today.

So in terms of what we’ve been up to, Jenny reminded me last week that I should talk about a very major thing we went through this year, which is that we became an entirely virtual business. Inera is actually 30 this year, a major anniversary for the organization. When I started, it was a little office in Newton above a delicious Vietnamese restaurant that we really loved and went to all the time, but as the team grew, that office could no longer contain them, and we moved to a larger office in Belmont.

And then, as most of you know, we were acquired by Atypon, which is part of Wiley, about three years ago. Wiley already had an office in Medford, Massachusetts, which we were in the midst of moving to in March of 2020. Some of you may recall that some things happened in March of 2020, and I actually never even saw the Wiley offices. I’m told they were lovely, but unfortunately Wiley opted to close the Medford office in May of this year.

So this wasn’t a surprise to us when it happened, but it was a bit of a scramble; we didn’t have much time to get out of there. Our physical infrastructure, meaning our servers and the like, was moved to New Jersey, so that was a major challenge for us. But we are now a 100% virtual team, which of course everyone’s very used to at this point. One thing we realized recently, which is interesting, is that fewer than half of the Inera staff are now based in Boston. So while Boston is still a hub for us, we’re pretty distributed at this point.

So eXtyles, why we’re all here, what’s been going on with eXtyles in 2022?

So for the past couple of years I’ve been doing these stats. I like to go through the master release notes every year and see what the trends are, and whether there’s anything of particular note that I want to highlight during my XUG presentation about what has changed in the software. As of mid-September, when I did this last month, there were 129 new items in our master release notes since the previous XUG: 38% were general improvements in eXtyles, 62% were specific to particular customer configurations, and nine were bug fixes.

What’s interesting to me, because I review my slides from previous years every time I do this, is that these percentages are extremely consistent from year to year, but the absolute numbers are a lot lower this year in terms of the number of changes we’ve made. And that, much like the office closing, was not a surprise to me either, for a couple of reasons.

The first is that when we started, eXtyles was not only the primary product but the sole product, and that’s just not the case anymore. The development team’s work encompasses the kind of traditional adjustments you’d see in the release notes, but also holistic work on the entire product family, including Edifix and eXtyles Arc. As you know, all of these products share code and functionality, but the development team is more distributed in terms of the types of work they’re doing. The work in cloud solutions is something that Bill is going to talk about in more detail tomorrow, so stay tuned for that.

The second reason there are fewer adjustments to eXtyles this year is that we took a step back to look at our overall processes and infrastructure more thoroughly. The way we framed it is that eXtyles gets regular physicals and checkups, but this year it reached a point where we needed to do a more thorough examination of the software. When you have something that’s been around for 22 years, you have to constantly evaluate it, refresh it, and in some cases replace things to ensure the viability of the software going forward. So we made a very conscious and deliberate decision this year to focus some of our resources on ensuring that the technology will continue to thrive for many years to come.

Some of what that has involved has been reviewing the entire eXtyles architecture from top to bottom, and also reviewing our deployment and delivery processes to ensure that those are optimized and as robust and efficient as possible for getting the software to you. We have been reviewing, and have already made changes to, a number of processes, including our automated testing mechanisms. For those of you who may not be familiar, we do a lot of manual testing: as Jo mentioned, every annual release of the software is tested from end to end to make sure that it’s going to work for you when we release it.

But we also do a huge amount of automated testing every night. Every time a change is made to eXtyles, it goes into automated testing to make sure that a new feature hasn’t broken something else. That’s been under review; it’s a massive part of our infrastructure that we’re making some updates to. The eXtyles installation architecture is also something we’re making changes to.

And then a third item here that doesn’t affect most of you, but probably affects some of you: hardware keys. Certain eXtyles products or installations require a hardware key, which is a USB dongle, to install and run eXtyles. They’re a bit of an inconvenience both for us and for the customer, so I’m delighted to tell you that we will be phasing them out, for any of you who are using hardware keys and find them a little frustrating. I don’t have a timeline on this, but we decided to commit to it because we all want to phase them out and modernize the installation process for those organizations that are currently using them.

And in the midst of all this work to modernize our infrastructure, we have been integrating with our parent company. This is an ongoing process that has accelerated somewhat this year, and of course we want to do it while ensuring that there’s no interruption in the reliability of the service that we provide to you. So it’s a challenge, and it has been a significant focus for us this year.

Having said that, there are a handful of eXtyles improvements that I did want to highlight from the release notes. As always, there were updates to the journal database, the database that lives inside eXtyles that we maintain and curate; over 2,200 new titles were added to it this year. I also noted a couple of improvements to the handling of nested tables, which can be very thorny at activation cleanup, so hopefully you’ll see some improvement around that behavior if you get nested tables in your content.

And in general, I saw a lot of improvements around recognition of document elements: improvements to the automated recognition of the different ways authors can refer to or include these elements in their content, to ensure that eXtyles continues to do the right thing in an automated fashion. For instance, unnumbered objects, access dates in references, resubmission dates (which are part of the article history block), and subtleties like the casing of complex author names in references and in author lists. These features have been built into eXtyles from the beginning, but we continue to refine them. There are nuances: we see different things from our customers as content comes in, and authors adjust the way they refer to things, so we want to make sure we’re still catching all of those and improving the automation.

Some highlights having to do with how eXtyles operates within the broader scholarly publishing ecosystem: we added support for BITS 2.1 this year, and Debbie Lapeyre, who is clapping, will be giving her overview of what’s new in JATS, BITS, and STS a little later today, so stay tuned for that. One of the things that Bruce wanted me to mention is that BITS 2.1 includes some specific improvements that were requested by eXtyles customers. So it’s not merely an improvement to eXtyles; it’s an improvement to a standard that’s used across the industry. You are helping improve these standards and tag sets, and we want to encourage you to continue to do that. We’ve also expanded support for identification of dataset citations, and I saw some improvements around support for preprint citations as well.

We always want to stay up to date with recommendations from organizations like JATS4R and requirements from PubMed, so we have included updates to the eXtyles export and deposit modules to support new JATS4R recommendations as well as new PubMed requirements. And for those of you who are using it, there have also been improvements to the ORCID merge behavior for that module.

On the Edifix side of things: for those of you who aren’t familiar, Edifix is our web service. It provides the same bibliographic reference processing functionality as eXtyles, but as a web service, and we do have some customers who use both. The API is used by organizations that integrate Edifix into other systems and platforms, and a new version of the API, 2.0, was released in January with lots of improvements and new features. There is a link to the Edifix blog in these slides, and if you’re interested in Edifix in general, please tug on one of our virtual sleeves at some point during this meeting, or follow up with us afterward if you’d like to learn more about this tool.

Okay, so eXtyles is a plugin to Word, and therefore Office is a major part of what we do. I just wanted to highlight a few things having to do with eXtyles and Microsoft Office in particular that we’re working on or that came up this year. The first is eXtyles support for 64-bit Office. If you came to this meeting last year, you know we were talking about this then as well; it’s part of the overall infrastructure initiative we’ve been working on. Up until this point, eXtyles has only supported 32-bit Office and 32-bit Word, and as time has gone on, that has become more challenging for organizations that are now moving to 64-bit Office by default. So we’ve been working on support for 64-bit Office for some time.

The good news is that this work positions us to be much faster with updates in the future; the bad news is that it has taken us much longer than we originally anticipated to complete. But I’m happy to report that I can now give you a firm date for when we will start rolling this out: we’ll begin a phased rollout of eXtyles for 64-bit Office on November 30th. We will release to customers in stages, but some of you will start getting this in about a month. And just to be aware, the way we’ve decided to do this is that, going forward, eXtyles will auto-detect at installation whether Office is 32-bit or 64-bit. That removes the burden on you of knowing, or having to request, a specific installer from us; we will release one installer, and it will do the right thing based on your IT environment. Hopefully that will simplify things for everyone, for you and for us, in terms of maintaining the installer.
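Auto-detection of this kind generally comes down to reading the installed Office’s reported platform and picking the matching add-in build. Here is a minimal sketch of that decision in Python; the function name is invented for the example, and the registry location mentioned in the comment is an assumption about Click-to-Run Office installs, not the actual eXtyles installer logic:

```python
def choose_addin_build(office_platform: str) -> str:
    """Pick the add-in build matching Office's reported platform.

    `office_platform` is the kind of bitness value an Office install
    reports, e.g. "x86" (32-bit) or "x64" (64-bit).
    """
    builds = {"x86": "32-bit", "x64": "64-bit"}
    key = office_platform.strip().lower()
    if key not in builds:
        raise ValueError(f"unrecognized Office platform: {office_platform!r}")
    return builds[key]

# On an actual Windows machine, the platform value could be read with the
# winreg module from a key such as
# HKLM\SOFTWARE\Microsoft\Office\ClickToRun\Configuration ("Platform" value).
# That path is an assumption for Click-to-Run installs; MSI installs differ.
```

So an installer that finds `"x64"` would select the 64-bit build without the user having to know which variant to request.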

Having said that, we will eventually phase out support for 32-bit Office, but that’s not going to happen anytime soon: at least a year out, possibly longer, and we will provide ample notice. Based on what we’re seeing and hearing, I think by the time we do that, pretty much everyone will be on 64-bit Office. But we will keep you informed, and if you have any concerns about that, please do let us know.

Here’s another Office-related item that’s been a hot topic among Inerans for the past couple of weeks, so I thought it might be worth bringing up today, because some of you may have noticed it recently: changes to Word comments. Microsoft recently introduced “modern comments” into Office 365, a change to the way Word comments work that allows for more extensive threaded conversations and, I think, more interactivity and functionality within commenting to make it more collaborative. But as Bruce pointed out when we were discussing this, every time Microsoft makes a change to the way comments work, it tends to break something for us. We are investigating this more deeply.

The first and most immediate thing we noted is that it has removed the ability to use Word character styles in comment text. The screenshot you see on the slide is not one of the new modern comments but one of the old ones, and all of that color comes from Word character styles that are not only color-coding the elements of a reference but also applying a named style to them to semantically tag them. This is a key feature of eXtyles’ reference correction: when there’s a big difference between what’s in the Word document and what PubMed has, for example, eXtyles will return a completely copyedited and tagged version of PubMed’s version of the reference in a comment and say, we think you should look at this. We’re not going to make the change automatically, but if you’re happy with it, you can just copy it, paste it into your Word file, and you’re done.

Like I said, we’re continuing to investigate this, and we’ll let you know if we learn anything new. For now, because you’re going to get this edited comment back without the color, the best workaround is to paste the text in, strip the tags at the beginning and end of the reference, and rerun bibliographic reference processing; the color will then reappear. It’s not ideal, it’s an extra step, but it is a workaround. We have reached out to Microsoft about this, but we’re not holding our breath on them getting back to us; maybe we’ll get lucky on this one. In the meantime, we’re going to do some more testing, and there will be a technical bulletin or an FAQ, or both, to come with more information.

Okay, so that’s sort of a little bit of a deeper dive into some of the things that have been going on with eXtyles.

And now I want to talk a little bit about Wiley Partner Solutions, which some of you may have heard about. This was announced last week in conjunction with the Frankfurt Book Fair. As you know, we are part of Wiley now, and we are also part of Partner Solutions. So what is Partner Solutions? It is a new division within Wiley Research, operating completely separately from the research publishing division. Some of you may already be aware that Wiley has acquired a fair number of businesses over the past couple of years, most of which now fall under the umbrella of this Partner Solutions division. It is comprised of trusted brands and experts working collaboratively to help publishers solve key challenges. We are part of Partner Solutions along with Atypon, eJournalPress, J&J Editorial, Knowledge Unlatched, and Madgex.

Many, if not most, of these organizations may already be familiar to you, and you may already be working with some of them. And by the way, while Partner Solutions was announced to the public just last week, it is not new to us; we had been working as part of Partner Solutions for some time before the public announcement. So what does this mean for you, the eXtyles customer? In terms of your use of eXtyles or your relationship with the team, it doesn’t mean anything; there’s not going to be any change to your use of the software or your relationship with us. Ideally, it means you’ll benefit from the fact that we’re now positioned within a broader network of solution and service providers who are experts in publishing workflows. There are a lot of opportunities for collaboration and innovation here, and ideally, down the line, it also means you’ll have access to a wider range of resources and events beyond events like this one.

To give some concrete examples of how this has worked out thus far: one collaboration we want to highlight is our work with J&J Editorial. For those of you who may not be familiar with them, and many of you probably already are, they provide editorial, production, and consulting services, and they were acquired by Wiley about a year ago. We hit the ground running with them right away, getting to know each other’s teams and looking for opportunities to work together. Not only were there immediate opportunities to work together, but opportunities to provide creative joint solutions also arose very quickly, so it’s been pretty satisfying to see how organically that relationship has developed. J&J is going to speak more about this partnership today.

Just to give you one idea of a direction we’re moving in: this is a really good collaboration for organizations that recognize the many benefits of implementing eXtyles in their publishing workflow but, for one reason or another, don’t want to run the software themselves. So that’s one collaboration we’ve spent quite a bit of time on over the past year.

As for other collaborations: we’ve been part of Atypon for three years. They recently had their community meeting, which is analogous to XUG for Literatum customers, and they also highlighted some joint customer projects, including their and our work with AAAS. AAAS migrated to Literatum fairly recently, and as anyone who’s gone through that kind of migration knows, that’s a big project and a big change. The fact that AAAS’s XML was generated by eXtyles, and that the eXtyles team and the Atypon team worked together, really did facilitate the process for them. The work of identifying and communicating any adjustments to the markup required by the new platform was simplified by the fact that all of these teams already had a relationship with each other.

And then there’s eJournalPress, which I know many of you work with; they are part of the Partner Solutions team as well now. We have been working with them on a couple of projects to facilitate workflow efficiencies for shared customers, and we’re also exploring opportunities for technology integration with both EJP and JPS. So I think there are a lot of exciting opportunities for these groups, many of whom we already knew and worked with in less formal ways even before Partner Solutions was created. It’s been a very interesting year for that.

Okay, so I said there’d be no change to your use of eXtyles or to your relationship with us, and this is true. However, I want to give everyone a heads-up that a couple of things may change over the next year, though not your relationship with us or your use of eXtyles. I won’t be surprised if we have “new” email addresses, and I have “new” in quotes here because in fact the Inera team has had three email addresses for three years now, because we’re part of Atypon, which is part of Wiley. We have multiple business identities, and it has become a somewhat unsustainable administrative situation for us. So I included yet another picture of the ocean here to describe how placid and sparkling I will feel when I have one calendar instead of three. That is a goal I think we’re all working towards.

So do not be alarmed; this isn’t just about us and our comfort, it’s also about you. Ultimately, consolidating our business identities is going to ensure reliable and efficient communication with us, and happier team members as well. There’s no timeline for this that I’m aware of, but I think it’s likely to come. And speaking of communication, I do want to highlight a couple of other things that are new and different.

Sylvia and Jo, who head up marketing activities for Inera and are our primary communicators, have officially moved into broader roles within the Partner Solutions marketing team. So congratulations to them, well deserved. They are still working with Inera, because we’re part of Partner Solutions, but now with a broader team of colleagues and customers. Those of you who are Atypon customers probably already know that they’ve been working on all of the Atypon community events and online meetings, and any blog or newsletter communications you’ve been getting from Atypon have Sylvia’s magic touch on them as well. They will, along with me, help keep you in the loop on any initiatives and events within the broader community that we think might be relevant to eXtyles customers.

There may also be some cosmetic changes to some of the marketing materials from Inera, things like the website and the newsletter; I suspect that for brand cohesion there may be some adjustments to them, but again, no timeline on this, and I don’t think you should expect anything major. So that’s an overview of what we’ve been doing in terms of the new Partner Solutions initiative. Inera is a very collaborative organization and always has been, so this has been a very natural development for us and has brought a lot of interesting opportunities to our doorstep. It’s been fun.

So thank you very much for your time and stay tuned for lots more interesting talks today and if anyone has any questions for me or for the broader team about anything that I covered thus far, I’m happy to address them.

Jo: I’m going to jump in for the Q&A portion here. We did have one question that came in during your presentation from Ron at FASS: should we continue to look for and correct nested tables before feeding files to Arc? Don’t worry, Liz, you don’t have to answer this, because Robin was very quick to answer it in the chat already. I just wanted to let people know in case.

Liz: What was the answer?

Jo: That you should continue to look for and correct nested tables. Oh wait, of course.

Liz: Robin, feel free to jump in.

Jo: Yeah, so if there’s a table.

Liz: Go ahead.

Robin: Shall I describe it, Jo?

Jo: Yeah.

Robin: What we’ve changed in particular is this: sometimes, because authors don’t know how to make a table bigger or smaller, they would nest a table inside an empty table, so they’d have a box around the outside and then a table on the inside. Our old code was totally oblivious to that and would always detable, or convert to tab-separated text, the inner table. So even if the outer table contained nothing and the inner table contained all the content, we would basically destroy the table structure by turning it into tab-separated text. Now the code is intelligent enough to look and see whether there’s anything in the outer table, and if there’s nothing, or just some paragraph marks or something like that, we detable the outer table and leave the inner table as it is.
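The decision Robin describes can be sketched with a toy model of nested tables. This is a simplified illustration of the logic, not the actual eXtyles code, and the class and function names are invented for the example:

```python
from dataclasses import dataclass, field


@dataclass
class Cell:
    text: str = ""                               # paragraph text inside the cell
    tables: list = field(default_factory=list)   # tables nested in the cell


@dataclass
class Table:
    cells: list = field(default_factory=list)    # flat list of Cells


def is_empty_wrapper(outer: Table) -> bool:
    """True if the outer table is just a frame: it holds exactly one nested
    table and no real text (empty paragraphs don't count as content)."""
    nested = [t for cell in outer.cells for t in cell.tables]
    has_text = any(cell.text.strip() for cell in outer.cells)
    return len(nested) == 1 and not has_text


def table_to_keep(outer: Table) -> Table:
    """New behavior: detable an empty wrapper and keep the inner table.
    Otherwise keep the outer table as-is (the old code always flattened
    the inner table regardless)."""
    if is_empty_wrapper(outer):
        return [t for cell in outer.cells for t in cell.tables][0]
    return outer
```

With this sketch, a wrapper table whose only content is an inner data table yields the inner table, while a genuinely populated outer table is kept intact.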

Liz: Oh cool.

Robin: So that’s-

Liz: And that’s just Arc, or that’s for desktop as well?

Robin: Yeah, that’s across the board.

Liz: That’s good.

Jo: So that was the only question that came in through the chat during your presentation. You are allowed to unmute so if anyone has a question, feel free to raise your hand and unmute. Speak now.

Liz: I’m now seeing everyone for the first time since I’ve had this slide up.

Jo: Yeah.

Liz: So hi everyone, very nice to see you.

Monica: I have one quick question. So just to clarify–

Liz: Hi Monica.

Monica: Hi, so you mentioned that support for 64-bit will be released on November 30th. We already have a new build, and I just want to confirm that that support won’t be integrated with the new build; it’ll somehow be distributed magically in another way.

Liz: Maybe I’ll let Jenny chime in. What I know is that we’re going to do it in a phased way, Monica: we’ve already targeted certain customers that will get the first round of releases, and then we’ll go to the next round and so forth. But you mean you recently got a release, is that what you’re saying?

Monica: Yeah.

Liz: And what will we do for people who have recently gotten a release but might want the 64-bit sooner rather than later?

Monica: Yeah, we don’t necessarily need it sooner, but we do have our IT department build the distribution of the new build. So I don’t want to get a new build and then have them build it again.

Jenny: Sure, absolutely. Hi Monica, how are you?

Monica: How are you?

Jenny: Good, good, thanks. So we have already started to create a list of urgent releases, for those of you who have reached out to us and said this is a real problem and we need a 64-bit solution sooner rather than later. Those we’ll be sending out after November 30th, delivered as a new installer, so that’s something you’ll get as a new eXtyles build. For everyone else who’s thinking, we’re fine with how we’re working right now, it would be nice but it’s not urgent, we’re planning on making the new installer available during your annual update rollout. So whenever you usually get an annual update is when your update will include support for 64-bit. We’ll send more information, and we’re going to be doing some rollout planning in the next few weeks, so hopefully all of this will be demystified in terms of when you can expect to receive it. But absolutely reach out to us at support at any point if you know that this is a pain point for you and you need to be on a more urgent release schedule.

Monica: Okay, thank you.

Jenny: Yeah, you’re welcome.

Liz: Any other questions?

Jo: While people think of their questions: for anyone who’s not in the chat, Sylvia added a few links during Liz’s talk to learn more about Partner Solutions and the Edifix 2.0 blog post. So those links, while in Liz’s slides, are also in the chat.

Liz: I can’t see the chat when I’m talking, but I do see that there was some conversation about the modern comments. Have people seen them in the wild already? Yes, Jenny’s nodding, okay.

Jenny: I mean I have and my list of complaints is a bit longer than what you showed.

Liz: Right, I know, I know. Yeah.

Bruce: It’s worth reinforcing a comment that Robin made: if you don’t like modern comments in general with Office 365, you can turn them off and revert to the old-style comments. However, that does not bring back support for character styles. Microsoft was very clear in a blog post that they cut a lot of features from comments when they went to modern comments, and that they want to hear from people about which features they consider essential. So if you consider that feature essential, as we do, we could, and I believe Jenny would be part of that, send a post about modern comments to all of our customers.

Liz: Oh yeah.

Bruce: Include the link so that you can ping Microsoft and say, hey, please fix this.

Jenny: I mean I don’t feel so bad about that but yeah, we could do that.

Bruce: That would be great, thank you.

Jo: Monica, comment in the chat.

Liz: Monica, did you know that you could turn them off? You did.

Monica: Yeah, that was one of the first things I researched, because, I mean, I guess it’s a mixed bag, but the worst thing about it is that the way the updates roll out is very mysterious. Some people got them, but I didn’t get them for a really long time, so it was really strange how it was rolled out by our IT department; it just kind of bubbled up, and then I figured out how to turn them off. But they do say in the instructions for turning them off that the ability to do so will not be permanent, that at some point you’ll be stuck with them.

Liz: Right, yeah.

Bruce: For any of you who do have friends in your IT department, if those friends know people at Microsoft or have any kind of contact with Microsoft, please have them relay the word. I think modern comments are going to be a huge goose egg for Microsoft and they need to hear that from as many people as possible.

Liz: I mean, I laugh a little ’cause it does seem like it might be futile, but it’s not necessarily futile. I mean, Microsoft has walked back things in the past that were unpopular or that didn’t work particularly well. So it’s possible.

Robin: And it’s not just character styles, there’s quite a lot of formatting you can’t put in there as well, just regular face markup.

Liz: So you can do bold and italic and so forth.

Robin: And that’s about it, yeah.

Liz: Anything else?

Jo: Allegra makes great-

Liz: Allegra’s feeling powerful.

Jo: “All things are possible with sufficient social media pressure,” that is very true.

Bruce: Does that mean we can get Microsoft to bring back Clippy? Anyone remember Clippy?

Jenny: Okay, so what are we, 48 minutes in and we had our first Clippy name drop. So good job everyone.

Robin: Yeah, where’s that Bingo card?

Jenny: Yeah, right?

Jo: I know.

Liz: Speaking of 30th anniversaries of things, Clippy.

Granularizing BITS XML for Product(ion) Flexibility

Presenter: Cindy Maisannes, CFA Institute

Jenny: So, hi, everyone. I’m Jenny Seifert, from the intro earlier. I’m really happy to introduce Cindy Maisannes this afternoon. She’s going to talk to us about, she’s going to make me say it, “Granularizing BITS XML for Product Flexibility,” which is a case study from CFA Institute on re-architecting their BITS XML for richer reuse of content. Cindy is the Senior Manager for Publishing and Technology at CFA Institute, where, for 15 years, she has supported the production of books and journals in digital and print formats fueled by eXtyles. Prior to her work at CFA Institute, she collaborated with faculty, staff, and students to build digital scholarly text projects at the Electronic Text Center and the Scholars’ Lab at the University of Virginia Library. So thank you so much, Cindy; go ahead and take it away.

Cindy: Great. Everybody can hear me, and everybody can see my screen? Hopefully. Scream if not, but-

Jo: You’re all good. You’re good to go.

Cindy: Great. Well, yeah, so thank you for calling out my title, “Granularizing BITS XML for Product and Production Flexibility.” I did have a moment of panic last night where I realized that “granularize” is not actually a verb, but it does appear in Wiktionary, so it has at least that level of credibility behind it. So today, I’m going to be talking to you about the background of our curriculum production at CFA Institute, going into a little bit of detail about the old way that we used to do things, talking about how we’re doing them now, and then a little bit of a discussion of what we did right and what we did wrong along the way. So for some background about our curriculum, we do have several programs of study at CFA Institute. The most well known of these is CFA Program. And as a side note, CFA Program is actually 60 years old this year. We administered the first exams in 1963, which means that the first curriculum was released in 1962.

So the content of CFA Program and all of our other curricula, not surprisingly, was originally conceived and created as a print product. Up until about 10 years ago, that was exclusively the case. The curriculum content is unique in that it’s updated fairly significantly every year by a distributed pool of external SMEs. And because we’ve got such a large pool of external writers, we need an authoring platform that’s easy for everyone to use and doesn’t require additional licensing fees or user training. So having authors share their curriculum readings with us as Word documents has always worked best. We then reproduce the candidate study products from the ground up every year using eXtyles SI to convert to XML and other automated workflows for downstream conversions like Typefi for PDF production and some custom scripts to create EPUB eBooks.

So I’ll show you a bit about the old way we used to manage all of this content. Here, you’ll see a basic breakdown of what a curriculum chapter or a reading contains. And so in 2011, when we were creating our first XML architecture for CFA Program, we settled on dividing content up into individual documents based on two criteria: what kind of content was in the document, and which parties might have authored the document or retained copyright ownership. We broke these out into three different kinds of documents. Learning outcome statements were always authored by CFA Institute, but the text of readings could have been written by another set of parties, and the practice problems and solutions could potentially have been written by a third set of parties. Each of these sections was created as its own Word document and was converted into its own book part in NLM Book 3.0 XML.

So within the learning outcome statements document, all of the individual LOS, or learning outcome statements, that were covered in that reading were presented in one long list. Literally, those items were tagged as a list in the XML, and the only distinction from other kinds of lists was a list-content attribute specifying that this was an LOS list. The reading itself could be quite long, and at the end of the reading Word document, we would have glossary terms from the reading, alongside their definitions. Readings could be up to 100 pages, sometimes longer, followed by all of the glossary terms, and then a third document included all of the practice problems and solutions, which were presented at the end of the reading in the print textbook. Like the LOS, these problems were all presented in list format in the XML, followed by another list of all the solutions. And we worked with Inera for a really long time to try to find a way of linking individual problems in the list to the solution that corresponded to it in the following list, but because the content was so complex and actually included other lists as part of the questions themselves, we were never able to find a reliable way of creating those links. So for what we were doing with this curriculum content at the time, this breakdown of content and the XML tagging worked well. There were some minor quirks and workarounds for specific content configurations that we would have to deploy at times, but in general, it was pretty straightforward and useful. So our CMS and production system architecture at this time was a customized version of SharePoint 2010 that ran all of our automated production workflows.
We’d have curriculum source files maintained as Word documents in SharePoint 2010, and then those would get sent out to eXtyles SI for conversion to XML, and then they’d get imported back into SharePoint, and the XML could then be workflowed out to Typefi for conversion to PDF and imported back into SharePoint.
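The parallel-list structure Cindy describes can be sketched roughly as follows. This is a hypothetical illustration, not CFA Institute's actual markup: the element names follow NLM Book 3.0 list conventions, but the ids, text, and attribute values are invented.

```python
import xml.etree.ElementTree as ET

# Sketch of the old tagging: problems and solutions live in two separate
# flat lists, distinguished only by a list-content attribute, with nothing
# but position to associate problem N with solution N.
OLD_XML = """
<book-part book-part-type="problems">
  <body>
    <list list-content="problems">
      <list-item><p>Problem 1 text</p></list-item>
      <list-item>
        <p>Problem 2 text, with its own nested list:</p>
        <list list-content="options">
          <list-item><p>Option A</p></list-item>
        </list>
      </list-item>
    </list>
    <list list-content="solutions">
      <list-item><p>Solution 1 text</p></list-item>
      <list-item><p>Solution 2 text</p></list-item>
    </list>
  </body>
</book-part>
"""

root = ET.fromstring(OLD_XML)
problems = root.findall(".//list[@list-content='problems']/list-item")
solutions = root.findall(".//list[@list-content='solutions']/list-item")

# Positional pairing only works when every nested list is correctly skipped
# and both lists have exactly the same length; with real, messy content,
# that assumption broke down, which is why reliable links were never built.
pairs = list(zip(problems, solutions))
```

The nested `options` list inside problem 2 is exactly the complication Cindy mentions: any pairing logic has to distinguish top-level question items from list items that are merely part of a question.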

So that system worked well, but it did have some limitations from the beginning, one of which was that SharePoint could only store XML as full documents. We couldn’t process sections of the XML files separately, and we couldn’t search within the XML. In 2013, we added the ability to convert curriculum XML to EPUB, but SharePoint couldn’t be updated to run those scripts through automation, so creating EPUBs required us to download all the needed XML and images out of SharePoint and process everything locally. And then finally, we launched the curriculum as courses in an online learning ecosystem hosted by BenchPrep. So to build a course, we would download all of our XML and images and send it all to BenchPrep, and then BenchPrep would convert those into an HTML course using a combination of scripts that they had written and just a lot of manual work. So over time, we began to see the need to update the system for a number of reasons, frankly. The most limiting factor we were experiencing was that SharePoint 2010 was reaching its end of life. We really rode it out to the bitter end with SharePoint, but it was so old, and the BizTalk functionality that moved content back and forth was so fragile that it was broken all the time, and our IT Department couldn’t support it anymore to even make minor updates. So this meant that we couldn’t upgrade any of our component parts of the system either, so eXtyles and Typefi were also effectively frozen. Whenever we’d get a new maintenance build of eXtyles SI from Inera, we’d end up just shelving it, because we couldn’t install it on the server.

Another issue that we wanted to improve in a new system was including EPUB and Learning Ecosystem HTML courses as formats that could be produced through automated workflows within the system, without having to download XML and images to process locally. I think if you asked any of our Production Team, they would probably say that it wasn’t too bad to have to run all of these scripts locally, but it was somewhat inefficient, and it was a little risky. People would assume that they had the most updated versions of the files already saved on their local machine instead of going through the hassle of downloading from SharePoint again, and I always got a little nervous about updates to files in SharePoint getting overlooked and lost.

Another inefficiency was the creation of the glossary within each program level. Glossary terms were maintained within the individual reading Word documents where they were introduced, and sometimes there would be instances of the same term getting defined in multiple readings, all with slightly different definitions. But with the terms only existing inside the reading XML files in SharePoint, which we had no ability to search using text or XPath, there was no way of addressing that, so we’d have to check for duplicate terms when the glossary for the whole level was created and then go back to SMEs to dedupe the glossary for us at that point.
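The duplicate-term problem can be made concrete with a small sketch. The records below are invented; this just shows the kind of check that becomes possible once glossary terms are searchable rather than buried in per-reading files.

```python
from collections import defaultdict

# Hypothetical glossary records as they might arrive from several reading
# files; the same term can show up with slightly different definitions.
entries = [
    {"term": "Duration", "definition": "A measure of interest-rate sensitivity.", "source": "reading-12"},
    {"term": "duration", "definition": "Sensitivity of price to interest rates.", "source": "reading-27"},
    {"term": "alpha", "definition": "Return in excess of a benchmark.", "source": "reading-03"},
]

def find_duplicates(entries):
    """Group entries by normalized term; any term with more than one
    distinct definition needs SME review before the glossary is built."""
    groups = defaultdict(list)
    for entry in entries:
        groups[entry["term"].strip().lower()].append(entry)
    return {term: group for term, group in groups.items()
            if len({e["definition"] for e in group}) > 1}

dupes = find_duplicates(entries)
# "duration" collides across two readings; "alpha" does not.
```

Run at load time, a check like this surfaces collisions immediately instead of at the end of the level's production cycle.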

These next two items were really huge in our need to update our workflows. When our first curriculum program was converted from print to a digital course, it needed to be broken down into shorter lesson-length sections for presentation online. We worked with SMEs to create these chunking spreadsheets, which took the print readings and dissected them into often very messy divisions of content. We’d have to tell BenchPrep, “Okay, learning outcome statements 1 and 3 from this reading are covered in sections 3.1, 3.4, and 3.9, and they should be linked to practice questions 6, 7, 11, and 13,” and then BenchPrep would have to take our XML and slice all of that up manually as they were creating our courses. We hated it, and they hated it, so we needed to find a better way. And lastly, there was a huge inefficiency happening with our Exam Development Team responsible for the exam questions that test the curriculum content. They were frustrated that the curriculum was updated every year, because we had no way of flagging to them what was changing in the curriculum or where or how those changes might be connected to exam questions in the question bank. And so they wanted to know, if something changed in the curriculum, do we have any exam questions based on that specific piece of curriculum that would need to be reviewed to confirm whether or not the question was still valid? And because the curriculum was reproduced every year with no persistent identifiers in SharePoint to associate last year’s sections of content with this year’s, the Exam Team had to do a complete remapping of the curriculum from scratch every year to compare against their question bank.

So we re-architected things, and this is the new way we’re creating content, which is now tagged in eXtyles as BITS 2.0 XML. Instead of print readings that were divided up into three long sections of learning outcome statements, reading text, and problems and solutions, we’ve now got all of these individual components being tagged as their own book parts that get saved separately. Readings have now been broken down into multiple shorter lessons, each of which is linked to the LOS being covered in the lesson and the individual practice problems and solutions testing that specific content. LOS have shifted from being a full list of multiple list items to being stored as individual learning outcome statements that are tagged as book parts. Practice questions are also no longer tagged as full lists of all questions associated with the reading, but are saved now as individual Q&A pairs, tagged as book parts, and associated either with individual lessons or with the full learning module itself. And instead of storing glossary definitions at the end of readings in which the terms are introduced, we can now save individual glossary terms and definitions directly in the database, which makes deduping easier to manage as we load content into the CMS.

So the gray boxes here indicate the division of how all of this content is managed as Word documents. A Word file contains these component parts in a specific order with LOS first, followed by the lesson text, then the glossary terms, and then problems and solution pairs. And RSuite, which is our new CMS, parses the components out in the proper locations in the database upon upload of the XML. Our updated eXtyles build allows us to tag problems and solutions using the BITS question wrap tag set, which means we can finally link questions and answers together without having to try and go through complicated logic of matching list items in order. And all of this tagging and storage in the database means that we’re now finally able to deconstruct the curriculum in various ways to be able to repackage it differently.
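A single Q&A pair in the BITS question-wrap model can be sketched like this. The `<question-wrap>`, `<question>`, and `<answer>` elements are real BITS 2.0 tags, but the ids, attribute values, and text here are invented for illustration; consult the BITS tag library for the full model.

```python
import xml.etree.ElementTree as ET

# One self-contained Q&A pair in the BITS 2.0 <question-wrap> model:
# the question and its answer travel together, so no positional
# list-matching is needed. Ids and text are invented.
QA_XML = """
<question-wrap id="qa-0042">
  <question id="q-0042">
    <p>Which measure captures a bond's interest-rate sensitivity?</p>
  </question>
  <answer id="a-0042" pointer-to-question="q-0042">
    <p>Duration.</p>
  </answer>
</question-wrap>
"""

qa = ET.fromstring(QA_XML)
# The answer points explicitly back at its question, instead of relying
# on its position in a parallel list of solutions.
linked = qa.find("answer").get("pointer-to-question") == qa.find("question").get("id")
```

Because each pair is its own unit, it can be stored as its own book part and associated with a lesson or a learning module independently, which is what enables the repackaging Cindy describes.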

So this is the CMS we’ve migrated to. It’s RSuite, which is structured on an XML-aware database, and that lets us save content and metadata at more granular levels than we ever could before. The new curriculum source of truth is the XML in the database, no longer the Word documents themselves. And so you can see at the lower left of the diagram that RSuite has integrated oXygen Web Author as well as eXtyles SI for us to be able to update content. Minor updates can be made directly in the XML through oXygen Web Author, and more significant updates are made by exporting XML to Word through a custom workflow created for us that tries to match the XML tagging to eXtyles Word styling. Our production workflows are all automated and able to be run out of RSuite directly without having to download XML or image files, so Typefi is integrated for PDF production, the XSLT to create EPUBs is also integrated as a workflow, and we’re now able to run our own workflow out of RSuite to create the HTML package for courses in the Learning Ecosystem on our side without having to get BenchPrep to convert to HTML themselves and divide up content using the old chunking spreadsheets. This workflow in particular saves us a month of production time for each course we deliver in BenchPrep, so it’s been a huge boon for us already.

We’ve only really been working in RSuite for about a year now, so we’ve still got lots of systems we’d like to connect and new functionality that we’d like to add. We’d like to be able to connect RSuite to our enterprise web platform, Sitecore, to be able to push out curriculum content to the web as refresher readings for our charter holder members, and there could be other delivery platforms down the road. We’re also very interested in being able to integrate a taxonomy classification platform to help us do more robust reporting about curriculum content stored in RSuite. This would let us hypothetically pull together new content collections about specific topics that can be used as the basis for professional learning courses. And lastly, we’re now set up to be able to report back to Exam Development about what is different in the curriculum year over year. The book part content stored in RSuite has all the persistent identifiers associated with it in the database, so with the future integration of a comparison engine like DeltaXML, we could flag what’s changed in the curriculum and potentially link that to the exam question bank to flag questions based on that changed content.

So in terms of what we may have done right, I think there have been some benefits for product flexibility, specifically for candidates. When production takes less time each year, we have more time to give back to the content authors, reviewers, copy editors, and exam question writers, so curriculum updates and exam question creation get more development time. And it helps ensure that the curriculum is as recently updated as possible before it launches to candidates. So instead of needing a year and a half for production, with content fossilized at that point, a year and a half before it launches to candidates, we can now do it faster. Automated production also means fewer human hands potentially introducing errors into content, which is always good.

And finally, customizing output formats is easy. So for example, print can include all of the Q&A together at the end of a reading, while the LES version online can have individual Q&A pairs linked to the lesson in which that concept is covered. In the future, we’re looking forward to being able to stand up new products and content formats much more easily using curriculum content out of RSuite. I will say that this is currently untapped potential of this new system and content architecture, but there’s increasing interest in it across the organization, so stay tuned.

And lastly, again, we can finally track persistent locations for pieces of content in the curriculum year over year. I won’t say “problem solved” for our Exam Team now, because the workflows needed to track changes still need to be fully defined, but we have the technology infrastructure to be able to accomplish this project, finally. On the production side, I think the benefits of our new system and content architecture are even more elegant.

First and foremost, digital output formats were baked into the system, all of our workflows, and how they were conceived. This is, of course, a benefit to candidates too, since content is being conceived now as a digital product first as opposed to a print product that gets retrofitted to online delivery, but also, on the production side, with the content being tagged and stored as these more granular pieces, we can manage it a lot more easily. We’re not limited by having to deconstruct these 100-page-long behemoth readings into something smaller, which we were rarely ever able to actually accomplish before. We don’t have to download content to process it as a regular course of production now, so that’s good. Glossary deduping also happens in a more agile fashion at the time of upload to RSuite, eliminating a downstream bottleneck with duplicate resolution having to get fed back to SMEs as a waterfall process. The database also powers easy, robust reporting: information about the curriculum can be found via XQuery, exported to Excel, and manipulated there. It’s incredibly easy to move individual pieces of content across locations in the curriculum now, too, without having to edit the content out of one or more Word documents and add it back into other Word documents. We can just change the location and the linking in the database.
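The reporting workflow mentioned here (query the XML store, export to a spreadsheet) might look something like this in miniature. The element names, ids, and titles are invented; real RSuite reporting would run XQuery against the database rather than Python against a string.

```python
import csv
import io
import xml.etree.ElementTree as ET

# A rough stand-in for XQuery-style reporting: pull every lesson id and
# title out of a toy curriculum fragment and write a spreadsheet-friendly
# CSV. All names and values are invented for illustration.
CURRICULUM = """
<curriculum>
  <book-part book-part-type="lesson" id="les-001">
    <title>Time Value of Money</title>
  </book-part>
  <book-part book-part-type="lesson" id="les-002">
    <title>Probability Concepts</title>
  </book-part>
</curriculum>
"""

root = ET.fromstring(CURRICULUM)
rows = [(bp.get("id"), bp.findtext("title"))
        for bp in root.findall("book-part[@book-part-type='lesson']")]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["lesson_id", "title"])
writer.writerows(rows)
# out.getvalue() now holds a CSV ready to open in Excel.
```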

And lastly, we can now compile curriculum product formats at different levels of granularity, so a single lesson or a group of lessons together in a learning module or a topic, all of these have objects like figures and tables and text boxes and equations and footnotes that previously would need to be renumbered manually by an editor if we wanted to combine a different configuration of products together. But now we have all of that numbering being handled on the fly when the content is exported out of the system, so that makes it just a whole lot easier for us and for our Editorial Team.

We definitely made some mistakes along the way. First and foremost, we’ve learned not to let systems fossilize, waiting for the perfect time to upgrade them, because that time never comes. Lesson learned. We formerly had a rule that we would never contemplate upgrading any component piece of our production system architecture while we were doing active production of a curriculum product to try to ensure that we would never encounter unexpected system downtime during production, but unfortunately, that just meant that we never upgraded anything, since we were always in the process of producing something. And especially now that we’re producing more products than ever, we’ve just had to get comfortable with this risk for the overall health of our systems.

Our next mistake, I think, has been not staying in extremely close communication with our Curriculum Development Team in particular as we’ve been building the system and our XML content architecture. So almost as quickly as their team told us what content structure they would be using for their new digital learning module format, and we went off to build that structure in eXtyles and RSuite and Typefi and BenchPrep, the SMEs independently began to tinker with ways to change that so that what we built was no longer entirely accurate. We’ve had to alternately update our architecture sometimes, and sometimes just tell the SMEs that, no, they can’t do something they planned to do, and that gets frustrating for everyone.

Also, I think we made some bad assumptions about product formatting based on partial information from only one user group. So the list of learning outcome statements at the beginning of a curriculum reading or learning module used to be formatted as a list with alpha-lower labels, which was how the Exam Dev Team did their mapping of the curriculum to exam questions every year. They would refer to the 2023 Level I curriculum, reading 27, LOS A. And when we were able to assign persistent identifiers to the LOS for the Exam Team to refer to, we thought, “Great,” and we eliminated those alpha-lower labels in all curriculum formats for candidates too, only to discover that there were many other ways those LOS labels were getting used that we didn’t know about, including by candidates reporting errata in the curriculum and by prep providers. So we made a mistake.

Lastly, I think we’ve still got a bit of figuring out the best way to track certain metadata information in RSuite and in our XML. Right now, there’s some metadata and semantic tagging and XLinks to other files that are tracked in our source XML files in RSuite, but because this information in the XML was basically invisible to users outside of our Production Team, we created metadata categories in RSuite that let users be able to access it without having to try to parse through an XML document. Unfortunately, we realized that it had become possible for the XML metadata to be updated when the RSuite version of that same metadata wasn’t. And thus, we’ve either got to remember to update the metadata in two places now when changes are made, or we’ve got to figure out a better way of doing this going forward. Now, I’m sure those are not the only mistakes that we made along the way, and I know we’ve got a bunch of people from my team on the call, so I welcome them to chime in with more, or to correct me where I’ve been wrong, but otherwise, I’m happy to entertain questions.
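The two-copies metadata problem Cindy describes can be illustrated with a toy consistency check. The field names and values are invented; the point is just that when the same metadata lives in both the XML and the CMS, a periodic drift report is one stopgap until a single source of truth is established.

```python
# The drift problem in miniature: the same metadata lives both inside the
# XML and as CMS fields, and nothing keeps the two copies in sync.
# Field names and values are invented for illustration.
xml_metadata = {"topic": "Fixed Income", "level": "I"}
cms_metadata = {"topic": "Fixed Income", "level": "II"}  # updated in one place only

def metadata_drift(xml_meta, cms_meta):
    """Return every key whose value differs between the two copies."""
    keys = xml_meta.keys() | cms_meta.keys()
    return {k: (xml_meta.get(k), cms_meta.get(k))
            for k in keys if xml_meta.get(k) != cms_meta.get(k)}

drift = metadata_drift(xml_metadata, cms_metadata)
# drift flags "level" as out of sync between the XML and the CMS.
```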

Jenny: Thank you, Cindy. That was a really… I echo Debbie in the chat, “What a great case study.” And there are a couple of questions, so I’m going to go ahead and start with question number one, which is actually a couple of parts, and so I think I’ll start with the first one, which is, you talk about, now, how you have this modular ability to take pieces of your lessons and move them around, and the question that came in was, “How do you put your content together,” right? So, how do you take these pieces and put them together? And then the part two of that, which is related somewhat, is, “Why choose book-part and not book-part-wrapper when you were deciding on your markup model?” if that makes sense, so I’ll go ahead… And Stacy, this question is from you, so if you want to chime in to clarify further to follow up on that, please feel free.

Cindy: Yeah, and I’ve already forgotten the first part of the question-

Jenny: Oh, sorry.

Cindy: I apologize.

Jenny: How do you concatenate the parts? Yeah, the bits that… Right.

Cindy: Right, so the individual documents are created as these lesson-length files, and then the content gets disassembled into LOS, lesson, blah, blah, blah in RSuite. When we do the recompilation process into PDF or LES or another format, it follows the order of the content in RSuite. So the content is all set up in the order that the curriculum should be compiled for that year in terms of the actual lessons themselves, and there are XLinks within each lesson that link out to the LOS that go at the beginning, or the problems and solutions that go at the end. So that’s the manifest of how all of the content comes together, which is by a combination of just, what’s the order in RSuite, and what is the logic rules of LOS go first, followed by lesson, followed by problems and solutions. Um-
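The assembly logic Cindy describes (order from RSuite, plus the fixed rule of LOS first, then lesson, then problems and solutions) can be sketched as a toy manifest resolver. The ids and content below are invented, and simple dictionary keys stand in for the actual XLinks.

```python
# A toy model of the compilation rule: each lesson manifest lists the ids
# of its LOS and Q&A pairs (standing in for XLinks), and compilation
# follows a fixed order: LOS first, then the lesson body, then problems
# and solutions. All ids and content are invented for illustration.
store = {
    "los-1": "LOS: describe the time value of money",
    "lesson-1": "Lesson body text",
    "qa-1": "Q&A pair 1",
    "qa-2": "Q&A pair 2",
}

manifest = {"lesson": "lesson-1", "los": ["los-1"], "qa": ["qa-1", "qa-2"]}

def compile_lesson(manifest, store):
    """Resolve ids in the fixed order: LOS, lesson, problems/solutions."""
    ordered = manifest["los"] + [manifest["lesson"]] + manifest["qa"]
    return [store[i] for i in ordered]

compiled = compile_lesson(manifest, store)
```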

Stacy: Oh, sorry. I was just going to give context to my question, which is that we have a lot of integrated resources with editorial teams, but different authors for all the parts, and they’re getting updated at different times. So that’s why book part wrappers work best, and not having a manifest where we have to update and process the book all at one time. So that was just some background to my question, but maybe you don’t have that use case.

Cindy: No, I think we probably do. It is one issue that we struggle with just in terms of having a more waterfall production system than we would like, where we have to wait for all of the content for, let’s say, a print volume to be completed before that volume can be produced at all. And so we’re always looking for ways to try and make that more agile and less waterfall, but I have to admit, I don’t think we considered book part wrapper as an option to be able to handle that. Perhaps we should have.

Jenny: Thank you. So Bruce has a question, but he’ll ask that.

Bruce: Yeah, I’m going to come off mute. As the content grows and changes over time, what’s your process for maintaining the integrity of the links? So for example, if one lesson gets so big, it’s split into three, how do you link to the right part? Or if some content becomes totally out of date and is eliminated, how do you make sure you don’t have links to it?

Cindy: Yeah, that’s… It’s been a challenge to really figure out the specific rules around this, as… Bruce and I talked about this project for years before we ever undertook it, and this was always the same question that we came back to of, how do you decide when something’s new versus just updated, and how do you manage all of that linking? And it’s been a huge project that we’re undertaking this year, in fact, because in the Level I of CFA Program curriculum, they’re moving a significant amount of content out of Level I, and so we’re having to update all of that linking for just a huge section of content this year. If it’s literally just a one-to-one swap, it’s not a big deal. We can make that, and RSuite updates all of the identifiers to link to the proper places. But where there are one lesson splitting into three, I think more often than not, we probably deprecate the old one and institute three new lessons and just reestablish the linking, which, we’d love to think there are better ways of doing that, but we just haven’t found them yet. So it’s a little more manual than I think we would ideally like.

Bruce: You have any kind of script that would automatically identify dead links before a publication? Or have you thought about that?

Cindy: Well… So yeah, actually, I think Trevor Hiblar is on the call, and I’ll encourage him to unmute himself if he can help me out with this. He’s our Content Architect, so he’s in the weeds with this every day.

Trevor: Yeah.

Cindy: I think we would get errors. Right?

Trevor: Yeah, so if there’s missing content, we have workflows that indicate missing content. Some missing content can be ignored for a test run if you’re trying to preview it, such as images. We can just say, “No, go ahead. “Produce the PDF so I can preview this.” If there’s missing content like questions or LOS, it’ll actually error and tell us, “Hey, you need to go fix this.” And so this is… One of the strengths that Cindy mentioned was granularizing this. We like to run a lot of the lessons and the learning modules before we run the entire book. And so this is a way that, as we work through the entire content, we can check and see what’s actually ready to go on the fly, and we can find those dead links before we get to the deadline and run the whole thing.
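The pre-run link check Trevor describes might look like this in miniature: scan every `xlink:href` in a document, report targets missing from the content store, and let "ignorable" types such as images through for a preview run. The element names and ids are invented; only the `xlink:href` attribute convention is taken from the XLink standard.

```python
import xml.etree.ElementTree as ET

# Clark notation for the xlink:href attribute, as ElementTree sees it.
XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

DOC = """
<lesson xmlns:xlink="http://www.w3.org/1999/xlink">
  <los-ref xlink:href="los-9"/>
  <graphic xlink:href="fig-1.png"/>
  <qa-ref xlink:href="qa-404"/>
</lesson>
"""

available = {"los-9", "fig-1.png"}  # stand-in for the content store

def missing_links(xml_text, available, ignorable_tags=("graphic",)):
    """Return (blocking, ignorable) lists of unresolved link targets."""
    root = ET.fromstring(xml_text)
    blocking, ignorable = [], []
    for el in root.iter():
        href = el.get(XLINK_HREF)
        if href is not None and href not in available:
            tag = el.tag.split("}")[-1]
            (ignorable if tag in ignorable_tags else blocking).append(href)
    return blocking, ignorable

blocking, ignorable = missing_links(DOC, available)
# blocking == ["qa-404"]: fix before the real run; ignorable is empty here.
```

Running a check like this per lesson or learning module, rather than only on the full book, is what lets dead links surface well before the deadline.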

Bruce: That’s very cool, thanks.

Trevor: Yeah.

Jenny: I know that we are at time, but there has been a couple of questions that are around this idea of BITS as an appropriate model for this re-architecting that you did, and did you find generally that it was a flexible enough model to handle the content in the way you wanted it managed, or did you find any challenges with this transition to BITS?

Cindy: Yeah, we didn’t have any challenges. I think the one thing that came up initially as a recommendation from ORBIS was that, rather than use <book-part> for LOS and for problems and solutions specifically, they wanted us to customize BITS to effectively create copies of <book-part>, relabeled specifically as LOS and as problems and solutions. And we elected not to do that, because it seemed like it introduced unnecessary complexity that wasn’t really adding much, though there are ways in which it would have made content easier to display in RSuite specifically. But in terms of the actual tagging model, I don’t think we’ve had any concerns at all.

Jenny: Very cool. Yeah, shout out for the Q&A tags in BITS that you all use-

Cindy: Quite heavily, yeah.

Jenny: Okay, any other questions before we wrap this up? I know Robin has a really good question that I think everyone at Inera is also interested in hearing, and maybe we’ll talk to you about this offline, Cindy: whether there were any particular issues or challenges you’re running into with taking BITS XML back into Word. That’s certainly very interesting, but probably worth its own session, right?

Cindy: Also still a work in progress.

Jenny: Exactly. Well, thank you so much.

Markup Guild: What We Talk About When We Talk About Tags

Presenter: Joni Dames, Inera | An Atypon Company

Robin: So while Joni gets set up, she needs no introduction, but Joni Dames is my fellow senior solution architect with Inera, and many of you will know her well already. And she’s going to talk to you about the Atypon Markup Guild. I should say it’s not true that the Markup Guild has a secret handshake, and we also don’t have our own guild hall, which I think I might have to log a ticket about. We probably need one. But I was just reading about guilds on Wikipedia, and apparently one reason they were scrapped was that they hindered technological innovation, technology transfer, and business development. So maybe that’s not a good sign. But anyway, Joni, I’ll hand over to you.

Joni: Thanks for that auspicious opening. Okay, so I’m here to talk about Markup Guild, which is a sort of new thing we’re doing this year. It’s a couple of people from Inera, me and Robin, and a few people on the Atypon side. And here we go. And really the reason Markup Guild exists is because your XML does not exist in a vacuum. Your XML goes places. It doesn’t just sit on your computers and not have to be found, or sent anywhere, or discovered by anyone.

So, there are a lot of things that determine the kind of markup, the kind of tags that you use for your content, right? You have the DTD you’re using, JATS, or BITS, or STS, or a custom one. There’s DITA, there’s HTML, there’s txt. If you’re using eXtyles, we create lots of different formats. We’re pretty heavy on JATS and BITS and STS, but we do work with all these other formats as well.

And how do you choose which DTD to use? And which flavor of the DTD? How do you tag any given component in your document so that it conveys the semantic meaning of the things in your file?

And you might have some formatting needs for what you’re doing with your content.

And then there might also be some vendor requirements for who’s going to ingest your content, who’s going to work with it, where it’s going to go. So, all of those things might determine how you need to tag your stuff. But sometimes it’s also about the semantic meaning versus the formatting needs and versus the vendor requirements. These things can be in conflict with each other, which can make it challenging to decide how to tag your content.

So, in an ideal world, for the XML purists, at least, you would have everything tagged semantically, right? You have just the right number of tags for the content to determine what it is, you don’t have any extra tags, and it all renders beautifully, and it’s accessible to everyone, and everywhere you send it they render it exactly the same way, and it looks exactly the way you want it to everywhere it goes and there’s no problems.

And in the real world, these are some of the questions that have come up when XML meets various platforms.

Like, “Where did my tables go?” A real question that came up once.

“Who are these contributors?”

“How should they be tagged?”

“What are these people? What is their relationship to the document?”

“Why do the affiliations look like that? What went wrong?”

“Why isn’t my funding information showing up over here?”

And, “What even is this thing?”

There’s a seasonal graphic for those of you who are familiar with “Nightmare Before Christmas.” If you’re not, it’s a musical.

So, if you’re looking at some flatware or cutlery, or how– whatever you want to call a couple of spoons and a knife and a fork, you might need a way to identify that if you are sending it somewhere else. And if you’ve ever moved, you’ve probably wanted a way to find your things once you’ve moved. So, when you’re moving, you pack up your boxes just like you would pack up your document and turn it into XML and send it off to a vendor, to a database, to your typesetter. And if you’re much better at moving than I am, maybe you actually were able to find everything you wanted when you got to your new place and started unpacking. Or, maybe you’re like me and you started unpacking and realized that you don’t know where your plates are or where your cutlery is, and you can’t find your fork, and you can’t find a pan, and you can’t cook anything, and that’s why we’re all ordering pizza on moving day. Or the food of your choice that does not require forks or plates.

So, here’s a picture of some completely unlabeled moving boxes, which is, I think, how a lot of us, or at least me, moved the first time we moved.

And then maybe the second or third time you move it has occurred to you that maybe there’s a better way to do this. And you label some of your boxes so you can find some of your things.

And then, seriously, I discovered this product while I was putting together the slides for this talk. I did not know that this product existed but apparently there is a company that makes little color-coded labels that identify not just, you know, the kitchen or a bedroom, but like which bedroom? The second bedroom, the third bedroom, and the type of thing that goes in that room. This is the box with shoes, this is the box with toiletries. And then also there’s some cool attribute tags on some of these boxes saying that they’re heavy or they’ve got fragile stuff, like the kitchen glasses box. If you really like attributes, that’s fun.

So, ideal tagging versus functional tagging. Again, especially if you’re maybe handwriting stuff on your boxes, you don’t want to itemize every single thing in your box, just the way we don’t want to tag every single noun and verb and sentence in our documents, probably, unless that’s fun for you.

And sometimes we can’t tag everything exactly the way we want to. Sometimes there are conflicts between the semantic meaning of something, and the formatting needs, and the vendor requirements, or someone is just being really stubborn and really insists on tagging something a very specific way. Not calling anybody out here, but- Certainly nobody has ever been very stubborn in our line of work.

And sometimes there really are just multiple equally good ways to tag the same information. And sometimes there’s no good way to tag something yet, in which case you should send your content and what it is to whatever relevant committee controls that DTD. Certainly JATS, BITS, and STS are always open to new sample documents of things that cannot be correctly tagged in those DTDs.

And that takes us back to, what is this? I would call it a fork. But sometimes you’re an animated character in Disney’s The Little Mermaid and you have limited experience with humans, or you know your own experience with this thing, and it’s a dinglehopper and you use it to comb your hair. So, maybe, sometimes you’re a mermaid in an animated film and you’re packing up your moving box and you put your dinglehopper in your toiletries box instead of in your kitchen box. And somebody unpacking boxes is going to have to deal with that.

Which gets us over here to, “Sometimes you have to work with people and systems that are using information in ways you didn’t expect.”

And again, this comes into play when we encounter documents that are structured in ways we didn’t anticipate initially, or people using information in ways that we didn’t anticipate.

So the Markup Guild exists so that a whole bunch of us can come to a consensus and determine the best way to tag content and review new documents, and the content for new clients, and to make sure that if there’s a question about how something should be tagged, or if something is being tagged incorrectly, that we can come to an agreement on what is the best way to tag it. And also to be a line of communication between, you know, the groups within Wiley. So, for Inera and Atypon, for content at Literatum, for content being exported by eXtyles.

And also so that we can have that open line of communication with all of those committees, like the JATS, and BITS, and STS committees, about what people really are doing with their content in the real world. And so that we can also know what’s happening in those discussions, so that there’s a little more back and forth. So, some of the things we’ve talked about recently have been making sure that correctly tagged XML being sent to the Literatum team at Atypon does get rendered correctly. And so that we can be a unified voice saying, “Yes, you have to fix your rendering if the XML is correct.” And again, also just making sure that we do have a consensus about what constitutes correct tagging. Because sometimes there is that disagreement.

All right. I think we’re at time, so let me know if you have any questions.

Robin: Yeah, thanks, Joni. I think we just have a couple of minutes for questions, if…

Stacy: Hi Joni, I have one question. It’s Stacy. Do you follow JATS4R recommendations as part of your Markup Guild?

Joni: We are aware of JATS4R recommendations as part of Markup Guild, and when we choose not to follow them it’s because there is a reason that has been discussed in the Markup Guild. But yeah, JATS4R does come up a lot. Full disclosure, I am on the JATS4R accessibility committee that’s actively working right now on accessibility recommendations. And so one of the things that I wanted to talk about in the last Markup Guild meeting was stuff related to what the accessibility committee was talking about, so that I could bring that background and that expertise from within Atypon, and all of the stuff that they see, over to that committee.

Stacy: Right, very similar to the PMC Style Checker and JATS4R relationship it sounds like.

Joni: Yep.

Robin: So Joni, Lauren asked, “Does the Markup Guild have a relationship with PubMed, Crossref, other indexes?”

Joni: Not officially? I think not really beyond the sort of open communication that we have with them here already.

Lauren: Could I just, yeah, I was just curious because there’s a particular problem that we’re having with a particular type of content rendering correctly when we send downstream to PubMed and, you know, we might be calling on you guys for some help. So, thank you.

Bruce: We are always happy to join in multi-party conversations like that, to try to work out optimal markup. It’s certainly been done many times before.

Joni: All right, I’m going to stop my sharing.

Robin: Well thanks, Joni, that was great.

What’s Happening in JATS, BITS, and STS in 2022

Presenter: Debbie Lapeyre, Mulberry Technologies

Sylvia: Okay, so our next session is “What’s happening in JATS, BITS, and STS in 2022” with Debbie Lapeyre. Debbie is VP of Mulberry Technologies, which is a consulting firm that specializes in helping clients toward publishing and documentation solutions through XML, XSLT, and Schematron. She’s also involved in JATS, BITS, and NISO STS. She’s taught hands-on XML, XSLT, DTD, and Schema construction courses, as well as numerous technical and business-level introductions to XML and JATS. If you’ve been to XUG before, you know Debbie. And her hobbies include birding, pumpkin carving, parties, and many, too many books. Take it away, Debbie.

Debbie: Okay, thank you. I’m bringing you what I think of as the annual update to JATS and BITS and STS, and I’m going to do them individually. But first, let me warn you about this talk. I only have 20 minutes, plus 10 for questions, and I could talk about JATS or BITS or STS for a week. Notice I said “or” there, and you have asked for “and” in 20 minutes. So, I realize I have a mixed audience here. Some of you are really interested in the geeky details, and some of you aren’t. So, there are going to be a whole lot of slides that I don’t talk through, all right? They’re labeled. The idea is XUG is going to put these up, and if you want to see what they said, you can go back and catch them from XUG, got it?

Okay. JATS, the first one. Journal Article Tag Suite. JATS 1.3. Z39.96-2021. I told you last year, it revved. It did. We have a brand new standard published. We have new DTDs, RNGs, XSDs, and a redesigned Tag Library. If you haven’t seen it in a year or two, it’s worth going to look. But, what’s happened since is a lot more interesting. We’re working on two things simultaneously. We are working on JATS 1.4, which is the next regular release. And, we’re working on JATS 2.0, which is a non-backwards-compatible, “we can make it what we want it,” release.

Okay, but the interesting thing we’re doing, I think, is that it has occurred to us that the JATS 2.0 meetings are coming up with some dynamite ideas that are not actually backwards-incompatible. We just hadn’t talked about them as part of regular JATS. So, we’ve gone through the recommendations for 2.0, mining them for what we think could go into 1.4. And that’s what we’re discussing now.

All right. Some changes already recommended for JATS 1.4. We want to make permissions repeatable in more places. The French have pointed out to us that French-Canadian and English-Canadian permissions can both appear in the same document. Whether the document is in English or French or both, the permissions are likely to be in two languages. So, it needs to repeat, and it needs to have a language attribute. Thank you, Canada. We heard you.
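As a rough sketch of what that proposal could look like (the repetition and the xml:lang attribute on <permissions> are the proposed JATS 1.4 change, so this is not valid against any released JATS DTD):

```xml
<!-- Proposed JATS 1.4 behavior, sketched; not yet in a released DTD -->
<permissions xml:lang="en">
  <copyright-statement>© 2022 The Authors. All rights reserved.</copyright-statement>
</permissions>
<permissions xml:lang="fr">
  <copyright-statement>© 2022 Les auteurs. Tous droits réservés.</copyright-statement>
</permissions>
```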

MathML 3 committee has come up with new modules. Use them. We’re using the, I don’t know, 2004 set, something like that.

ALI, Access and License Indicators. The ALI people have also come up with a new recommendation. Use it. It’s got at least some new attributes. Now, that’s what we’ve agreed to, and that’s all we’ve agreed to. But, here’s part of what we’re discussing. Crossref has pointed out that institution is all very well, but sometimes, your institution is a university that has a campus, that has a department, that has a laboratory, and you want to talk about just the department or the laboratory. You need a way of breaking down those units. We have had a request that <anonymous>, which is currently an empty element in publishing, have content because they’re using it in some very clever ways for anonymous peer review. And, we’re discussing that. I think it’s clever, and we’re adding <legend> from BITS. I’ll talk about that when I get to BITS.

In Draft. This is one of the most exciting things, but I’m not going to talk about it. It’s complicated. We have decided that there is a world out there doing multilingual documents. The Canadians have told us that in French and English. SciELO has told us that in Spanish and English and Portuguese. And then, the European Union people said, “Well actually, we’ve got Romanian and German and-” Okay, we get it.

So, we want to be able to care for a journal article when the whole thing is in two or more languages or substantial portions are in two or more languages. I have this section in French followed by this section in English, and the idea is if I could handle them, I could say, “Look, show me the whole thing French and English mixed up together.”

Or, “I don’t read English. Show me only the French document. Pull that out for me please.”

“Do all the searching only in English because if you find something in the French and bring it back to me, I won’t be able to read it. I don’t read French.”

That’s the goal, and I think we’ve done it. It’s in two pieces. Multilingual metadata. Basically if it’s already there, you just ask it to repeat. I want an abstract in Romanian and an abstract in Greek and an abstract in Hungarian. Fine. I’ve got three abstracts. They have language attributes. All metadata needs to do is repeat. Text, however, is a little more interesting. Textual structures, if I have alternate sections in French and English or Greek and Romanian or Spanish and Portuguese, they don’t have to be co-located by language. You may not get all of the Portuguese followed by all of the Spanish; you may intermingle them, okay? So, no kind of wrapper element would work. And, we’ve come up with an attribute solution that I’m not going to tell you about, but it’s here.
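For the metadata half, the in-draft idea is simple repetition with language attributes, roughly along these lines (a sketch of the draft as described, not released markup):

```xml
<!-- In-draft multilingual metadata: one abstract per language,
     distinguished by xml:lang; the metadata simply repeats -->
<abstract xml:lang="ro">…</abstract>
<abstract xml:lang="el">…</abstract>
<abstract xml:lang="hu">…</abstract>
```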

Okay, next topic. “Changes to Collaboration.” Shout out to Joni here. She said that we’re talking about how elements ought to be used. Well, <collab> is such a perfect example of how elements ought not to be used. When <collab> was created – and I know, I was there in 2002 – what we thought it was was the name of the collaboration, okay? Moose and Squirrel Department, all right? Well, people put all kinds of junk into it. They put lists of contributors, and they put addresses, and they put footnotes, and they put – You name it, they put in it.

So, the proposed solution that we’re going to be discussing is to deprecate current <collab> and <collab> alternatives altogether and create some new elements to do what the old elements should have done. <collab-name> will be what <collab> was supposed to be. The name of the collaboration. All you want is that little string that’s its name. You’ll be able to find it. And <collab-wrap> will hold all those other things, like the 120 participants that you thought was a good idea.
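A hedged sketch of that proposal (the element names <collab-name> and <collab-wrap> are as described in the talk, but this is under discussion and the exact content models are not settled):

```xml
<!-- Today: everything ends up crammed into <collab> -->
<collab>Moose and Squirrel Department</collab>

<!-- Proposed: the bare name in <collab-name>, everything else in <collab-wrap>;
     nesting shown here is illustrative, not final -->
<collab-wrap>
  <collab-name>Moose and Squirrel Department</collab-name>
  <contrib-group><!-- the 120 participants go here, not in the name --></contrib-group>
</collab-wrap>
```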

I’m excited by this one too because … “Citing Standards.” We obviously made up how to cite standards out of whole cloth in JATS in the early 2000s. And we did a terrible job of it because we had no idea how standards were cited. But then, STS came along and they said, “We’re the standards people, and we know how standards are cited, excuse me.” But, it’s done with something called the “Standards Designator.” And that is just a string, but it includes the originating organization, the standard number, and the year. And they have called it “<std-ref>.” The suggestion that we’ll be discussing is that JATS add <std-ref> to citations. I hope we say “yes” to this one.
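In NISO STS today, the standards designator string is captured roughly like this; the proposal is for JATS citations to adopt the same <std-ref> element (a sketch with an abbreviated title; check the STS Tag Library for the full citation model):

```xml
<!-- The designator string: originating organization + number + year -->
<std>
  <std-ref>ISO 8601:2004</std-ref>, Representation of dates and times
</std>
```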

Supplementary material has always been a problem. And we split off a subgroup on supplementary material to talk about it, and they came back and said, “Look, there’s no such thing as supplementary material. You just have figures, all figures.” And, some of them you say, “By the way, I’m supplementary.” You don’t make any difference in how you tag them. They’re just there. Similarly tables, similarly sections. And that would deprecate the current supplementary material metadata elements, which are <supplemental- <supplementary-material>. Oh, interesting. That should be <supplementary-material>. And the thing I find intriguing is I fixed that last night, but there it is, wrong. <supplementary-material> and <inline-supplementary-material>.

Okay, attributes also are going to change. @xml:lang has been added to <permissions>. I told you about that. We need the French permissions and the English permissions in Canada.

We’ve added some vocabulary attributes. We have a new value “interview” for @article-type.

And an interesting one, particularly I hope for Cindy, @question-response-type used to be a fixed list, and that’s nice, but there are lots of @question-response-types that aren’t there. So, we’re adding the value “custom” to this.

At this point, I’d like to take an aside because you may not know how the custom value works, all right? If a schema gives a list of values, all right, it’s in the DTD that your color is red and blue and green, and therefore, you can say <ribbon color=”red”> in your document. There’s no way to add another value like chartreuse or purple or yellow without changing the schema. And you people ought not to be in the business of changing DTDs and schemas. So we’ve given you, for any fixed list, a new value “custom”. So, the DTD says the values are now red or blue or green or custom, and there’s a @custom-type attribute. And you can see here at the bottom of the page how that works. I have a ribbon with color custom. I have a custom type that tells me, “Oh yeah, by the way, that’s purple.” This will allow you to have a fixed list because most of the time, we are red or blue or green. But, on those rare occasions when we need to be yellow or chartreuse or purple, we can do it. And, I’m hoping, Cindy, that you find this useful for your question types.
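The pattern from the slide, using Debbie’s hypothetical <ribbon> element (not a real JATS element; only the “custom” value plus @custom-type mechanism is the actual proposal):

```xml
<ribbon color="red"/>                          <!-- a value from the fixed list -->
<ribbon color="custom" custom-type="purple"/>  <!-- an off-list value, via "custom" -->
```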

More attributes. Article types have changed a lot since 2002. We have data papers and collections and protocols and community comments and expressions of concerns and Jupyter notebooks that we don’t even quite know what to do with yet, but we have them. And so, we’re looking at expanding the suggested list. It’s only about 20 items now. We’ve got more than 50 to discuss. We may not end up taking all of those, but you see the direction. And that is what’s happening in JATS.

The exciting part from the point of view of 500 feet is we are trying to bring into the next release all the cool stuff we came up with for 2.0 that will fit, ’cause some of the cool stuff in 2.0 says, “Get rid of these six elements. Replace it with this one.” And that’s not backwards compatible. We can’t do that.

New topic. Yes, still on track.

STS- Oh, this one is exciting, people. I actually have something to announce. STS, for those of you who don’t know, is the JATS family for standards. It comes in two flavors: Extended (XHTML and CALS tables) and Interchange (just XHTML tables). The status is– Well no, ignore this status.

Aside. We have version 1.0 now. Ideally, you’d figure the next version would be 1.1, but it’s not going to be, because NISO STS was based on ISO STS, and they already did a 1.1. So we said, “Phooey, you know. We’ll skip. Software companies do this.” You know, there’s never going to be a 10 or 11 or whatever. So NISO STS 1.0, when it revs, is going to be NISO STS 1.2.

Last year, I told you this was nearing completion. The committee voted the final change in October last year. Mulberry made the DTDs, the XSDs, the schemas, the tag library version 1.1. Standing Committee approval November last year. Mid-February 2022, notice we’ve changed years, the Standing Committee approved it, and it went out for public comment. And, in my cloudy crystal ball, I said, “If STS 1.2 standardization process starts in February, we should have a new STS by June.”

Yeah well, nothing has changed in the models or the documentation since February 2022. But, here’s the slide that tells a complete lie because these slides were due on the 20th of October, and this was true on the 20th of October. But on the 21st, things got better. NISO 1.2 survived public comment.

The NISO Topic Committee approved it. The Voting Pool members approved it, and NISO was going to request ANSI approval in mid-October, which they did. And, ANSI might or might not have come back quickly, but ANSI approved in a day, and we are approved. There is going to be, as soon as and as fast as we can publish it, a new STS standard. Applause to the people who made that work and made that happen. This slide is wrong. We’re going to have it.

So, what does it take to make it happen? Well, we need new DTDs, XSDs, RNGs, and tag libraries with a new publication date. A bunch of wording changes, and then we can publish it and release the non-normative materials on NISO.org.

Bruce asked me, “Could that possibly be done by XUG?”

And, at that point I said, “Bruce, we got word Friday after stoppage of work, and XUG is Wednesday. No, it can’t.”

So, I’m sorry Bruce. We didn’t make it. But, there is a Standards forum on the 14th of November, and we will be way early for that. It will be done probably next week. Yay, people.

All right, which brings-

Bruce: I believe, Debbie, the precedent is that the 1.0 DTD has an official release date of 10-31. Halloween. I believe that is now your new target for this version as well?

Debbie: Yeah, I have already- The DTDs, RNGs, and XSDs have already been done, Bruce. Although they’re not released. And yes, the official date is 10-31-2022. Halloween again! Yes! You’re right. That’s for me. Thank you.

Anyway, what’s new in STS 1.2? The same things I said last year, so I’m not going to talk about any of it. If you want to see what’s new, it’s still new, and it’s still good, and it’s here in my slides, untalked.

So, I got five minutes, just about. What is new for BITS? This is kind of exciting too. BITS 2.1 has been published. It has new non-normative materials, DTDs, RNGs, XSDs, and the same redesigned Tag Library JATS has, which STS will also have, by the way. And, that’s the part that isn’t finished yet. If you haven’t seen the redesigned Tag Libraries, they’re good.

What is new and cool in BITS?

Well, we finally added <legend>, which got requested for JATS and turned down, but STS wanted it. And anyway, this is an overview slide. Let me take you through them one at a time.

<legend> is a key or a where-list or a variable list or a symbol chart. You have them for equations and figures and tables and graphics, saying, you know, “Where A is the angle of, and B is the angle of and-” Okay? They are now an official structure. Interestingly, the <legend> structure doesn’t have anything except an overview as unique and different. It can have titles and IDs and like that, but it then uses def-lists or tables or whatever you would’ve used before. But, you now know it’s a <legend> and can associate it properly with its figure or table or whatever.
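A sketch of how a <legend> might be tagged under BITS 2.1, reusing an ordinary def-list inside it as described; the surrounding figure and the graphic filename are illustrative, and the exact content model is in the BITS 2.1 Tag Library:

```xml
<fig id="fig1">
  <caption><title>Angles of incidence and reflection</title></caption>
  <graphic xlink:href="fig1.tif"/>
  <legend>
    <title>Where</title>
    <!-- <legend> adds no new inline structures; it wraps an ordinary def-list -->
    <def-list>
      <def-item><term>A</term><def><p>the angle of incidence</p></def></def-item>
      <def-item><term>B</term><def><p>the angle of reflection</p></def></def-item>
    </def-list>
  </legend>
</fig>
```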

BITS has a new element, unlike anything JATS or STS has, called <content-version>. It occurred to the BITS people that book versions and book editions were being treated as the same thing, but they’re not. So, <content-version> is a new element that will act for books and book parts the way <article-version> does for articles. And, there’ll be a version alternative, so you can have three versions going at once, which people do.

Ah, this is exciting, and this has already happened in JATS and already happened in STS and now it is happening in BITS. The definitions of <source> and <part-title>. <source> and <part-title> are two of the citation tags. They’re inside a bibliographic reference, and they used to say, you know, “The <source> of the book is Moby Dick, and the <part-title> is chapter three.” Okay? But, that was all defined in terms of document, and that’s not what our world is anymore. So, we just redefined these. They’re now a portion. A <part-title> is a portion of a larger resource. And, I don’t care if that’s a podcast or a module in a course or an example in a book or whatever it is. There’s something bigger around that I’m a part of. And that bigger thing is named in <source>. And this too is now the title of a resource. That’s supposed to be the title of a document? I’m sorry, but podcasts, however we view them, are not documents.

This has an interesting side effect. We’re now deprecating <chapter-title> because it’s not a <chapter-title> anymore. It’s a <part-title> for a book.

<processing-meta>, again, is something that JATS and STS have added, but BITS added it too. This is kind of a “glee bird weirdy.” It’s an element that is not part of the content or the structure or the publishing metadata of a book or a book part. What it is is information about the XML file itself at the file level.

<extended-by> and <restricted-by> are specifically so you can say, “Look, we’re following the PMC recommendation or the Inera Guild’s recommendation or the JATS4R recommendation, and here’s what that recommendation is, and here’s how it restricts us.”

You’ve also got <custom-meta> in there, which people have used to say, “Well, here’s the conversion vendor I was using, or here’s the conversion process I was using, and if I was being converted by eXtyles, here’s the version of eXtyles I was using.” Okay, you’ve got lots of metadata now that people have been wanting about the file, not about the document?

Now, there are a bunch of attributes on this that say things like, “What @tagset-family am I part of?” That makes more sense on JATS and STS, which have more than one DTD in the family. BITS is fixed to BITS.

@table-model. Are you using “xhtml” or “oasis” or “both”? Which you’re allowed to do.

@mathml, again, in JATS and STS. This is an, “Are you using 2 or are you using 3?” Well, BITS is fixed to 3.0, so no question.

And, maybe even more interesting, @math-representation. How many ways inside this document, not in our DTD, not in our whole database, am I using to make math expressions? I’m using images, I’m using LaTeX. I’m using MathML plaintext and images. Tell me in English for human readability what you’re using.
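Putting those pieces together, a hedged sketch of <processing-meta> using the attribute and element names as described in the talk; the values shown are invented examples, and the exact model is in the BITS 2.1 Tag Library:

```xml
<!-- File-level information about the XML itself, not publishing metadata -->
<processing-meta tagset-family="bits" table-model="xhtml"
                 math-representation="MathML, LaTeX, and images">
  <restricted-by>JATS4R accessibility recommendation</restricted-by>
  <extended-by>house extensions for internal workflow</extended-by>
</processing-meta>
```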

Okay, other element modifications, I’m not going there. There are some cool attributes.

I do want to talk about DTD version. It used to be fixed, so you set it to whatever the latest was that you were valid to. And that was a real problem for people who had big databases, because we’ve been really careful to be backwards compatible. So, if you’re at 2.1, you are going to be valid to 2.0 as well, okay? So now, it’s a list of choices so that you’re not restricted. If you’re 1.0, you know you’re 1.0. But, if you’re 2.1, you’re 2.1 or you’re 2.0, or 2-point-whatever. You’ll be valid to those; we’re being backwards compatible.

@hreflang is an attribute we borrowed from HTML. It’s not the language of the document or the language of the related thing you’re in. It’s the language of what you’re pointing to. Here it is in the context of a <related-article>. I have a <related-article> over there, and my <related-article> is in German. Implication, I’m probably not. I’m probably in English or Spanish, but the thing I’m pointing to is German. And, if you’re going to follow that link, you might want to know that.
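For example (a sketch: the DOI is a made-up placeholder, and “companion” stands in for whatever @related-article-type value applies):

```xml
<!-- This article links to a companion article written in German;
     @hreflang describes the target, not the current document -->
<related-article related-article-type="companion"
                 xlink:href="10.1234/example-doi" hreflang="de"/>
```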

Yeah, there are more. My request here to all of you is- There’s where you find the tag sets. Remember, XUG will have these slides up. But, comment. JATS and STS are both under NISO control, so they have a specific comment form and process, and you can put your comment in. BITS doesn’t, but it uses JATS-list, as does JATS. If you don’t have something that rises to quite the level of a formal comment, but you want to ask, ask for it there. Make a comment. How would you like this to work?

And, as when I practice, I’m two minutes over.

Liz: We build in extra time for that, Debbie.

Sylvia: Yes, we do. Thank you, Debbie. So, I’m going to just re-spotlight myself. No, apparently I’m not. Never mind. I’ll pin myself. Maybe that’ll work.

Debbie: Okay. I’m looking at one of the comments, and Robert Wheeler wants to know, “NISO STS tag library too?” Oh yeah, that’s what’s- That’s what we’re working on right now, Robert. That’s why we can’t release the DTD and the RNG and the XSD, which are done, because the tag library and the standard aren’t. But when it comes out, it will have that same brand new structure that you’ve got for BITS and JATS, and yeah. You’ll like it.

Robert: I wasn’t demanding.

Sylvia: They’re so nice and navigable. I love them. Sorry, go ahead, Robert.

Robert: I wasn’t demanding. Sylvia had shared the other two, so I was like, “Wait.”

Debbie: Wait. STS is going to have it too. Absolutely. And, they don’t yet. And, they won’t, sorry Bruce, by the end of tomorrow. But, within the next two weeks, we definitely will.

Sylvia: OK. I do not see any questions for Debbie in the chat. Does anyone have questions that they would like to ask with their voice?

Debbie: That’s what happens when you talk too fast. Everybody’s too busy scrambling to listen, and they don’t think of questions.

Jenny: Sylvia, I think I would just like- I would just like to give a shout-out for the attempt to start to wrangle <supplementary-material>. The wild west of <supplementary-material> for sure.

Debbie: Well, the group got together to make proposals on it, and what they discovered is no three publishers define it the same way. So, what we’re saying now is, in the days of electronic articles, you’ve all got to have it there. And, if you want to mark it as supplementary so your system can keep it out of the print, fine. But you want to be able to find that if you’re reading it online. You don’t care that those three extra tables are supplementary; you want to go read them. So, put them all in the article and just say, “This is supplementary.”

Jenny: It’s nice to have a little more guidance rather than just having someone ask us, “How can we mark up <supplementary-material>?” And then, our response being, “Good question.”

Debbie: Mark it up like what it is. If it’s a section, mark it as a section. If it’s a figure, mark it as a figure. But, what that means, as Charles is thinking here, “Damn, now I’m going to have to deal with them.” Yeah, you are.

Charles: I know that there are people that are using that to get things out of the file so that they can have manageable file sizes. And, that’s actually a really good use for people who have XML that’s, like, well over four or six megabytes.

Debbie: Well sure, and if you’re something like a data set, the odds are good you’re not going to want to put that. But, we’ve got all kinds of external links for that, you know. “Would you like to see the original data? Follow this link.”

Charles: But I mean, even if- But, if it’s just text and things like that, text and tables and things that would normally be in the file- The file itself is too big to work with, and-

Debbie: Again, you’ve got nice external links for that. You know, this will work for the people where they have little three-page articles, and we add two pages by adding two more figures. It won’t work for the multi-megabyte, approaching gig range, article. And they exist. I know that. Yeah. But, we’ve got all kinds of- JATS has lovely – and BITS and STS – external linking mechanisms for that.

Charles: Well, I’ll just- There are just people I’ll have to let know like, “Oh, that’s not going to work in the future.”

Debbie: That’s not going to work for you. It’s a great idea, but it won’t work for you. Yeah. Well, the original <supplementary-material> and <inline-supplementary-material> were used for metadata anyway, and they’ll still be available. You want your metadata about that figure? Here it is. You want your metadata about that section? Here it is. And, there’s an external link. It’s over there.

Sylvia: Okay. Do we have any more questions for Debbie in the next 3 minutes?

Debbie: I think what we should do with the next 3 minutes is you should put Robert and Bruce up- Their pictures up on the stage, and I- We should all applaud like mad ’cause NISO STS is out, guys. And they’re the chairs. Robert and Bruce.

Liz: Congrats, guys.

Sylvia: Not to be confused with Robert the Bruce. Where is Bruce? There’s Bruce.

Debbie: Yes. Well, that has been hard when you go Bruce and Robert, Robert and Bruce. Oh, right. Not the… Yes. Not the historical figure, but well done, guys. Hard fought fight.

Bruce: Thank you. It was a team effort by the working group. And credit to everyone who sent in suggestions.

Debbie: Oh, yeah.

Robert W: And Mulberry.

Debbie: Thank you.

Making Auto-Redact Work for You: Better Living through Enhanced Auto-Redact

Presenter: Jenny Seifert, Inera | An Atypon Company

Gianna: All right. So, hi, I’m Gianna Flores. Some of you may know me. I’m here to have the pleasure of introducing our very own Jenny Seifert and her presentation on enhanced Auto-Redact, titled “Making Auto-Redact Work for You: Better Living through Enhanced Auto-Redact.” This is the first topic in a new Inera web series that will highlight specific eXtyles features through a deep dive on how to use them most effectively. Enjoy.

Jenny: “Making Auto-Redact Work for You: Better Living through Enhanced Auto-Redact,” a tour of existing–

Jo: Sorry y’all. I’m going to start this over again.

Jenny: Hello, welcome to “Making Auto-Redact Work for You: Better Living through Enhanced Auto-Redact,” a tour of existing Auto-Redact features that you may not be aware of and a reintroduction to some of the newer features that we launched just a few years ago.

My name is Jenny Seifert. I’m director of client services at Inera and I’m happy to be doing a deep dive on Auto-Redact with you today. Just a quick administrative note that this is the first in what we hope to be a series of deeper dives into existing eXtyles features that we’ll be launching throughout 2023. So, pay attention to the newsletter for more information about that as we roll these seminars out. And if you aren’t receiving our newsletter, be sure to see Jo later on in the session.

So without further ado, I’m going to go ahead and get started.

So, Auto-Redact is a classic eXtyles feature that has been part of eXtyles almost since its inception. All eXtyles users have some form of Auto-Redact, whether it be a simple set of typographic cleanup rules that run immediately after cleanup – for example, our eXtyles JATS users – or a complex library of custom rules specifically configured for your publications. It is a context-sensitive tool, meaning that it leverages the structure applied through paragraph styles to target the application of rules to specific sections of text. This allows eXtyles to implement rules on, for example, body text, but avoid making the same changes to reference or extract paragraphs.

Further, rules can be publication specific, meaning that each of your publications’ editorial styles, if unique, can be honored. But beyond the standard editorial cleanup that Auto-Redact performs, did you know that it can add customized Word comments that help provide additional information to the author or editor? And it can do this automatically through the Auto-Redact process. It can add text highlighting to draw the editor or author’s eye to text that needs a more careful review or that may be too ambiguous to automate the cleanup of. And it can add character styles – ones not already applied through an eXtyles advanced process – to text that needs distinct markup in the XML.

Using Auto-Redact to add Word comments to the document is a great way to automatically communicate information to the editor or author. So, let’s start there and go into some examples. It’s particularly useful to add comments when content is too ambiguous or complex for automated cleanup. Custom text can be added to comments to communicate, for example, boilerplate author instructions, or to tell the editor how to proceed with editing specific text elements. If you have specific instructions that you always add to a document, those can be added through Word comments using Auto-Redact. Further, comments are easy to manipulate in Word: they can be easily skipped to via the navigation buttons on the Review ribbon, replied to, and removed either manually or via eXtyles post-processing cleanup.

Specifically, Auto-Redact can be configured to add comments that include custom text, such as conversion formulas to assist the editor or the author. So for example, if a temperature in degrees Fahrenheit is encountered and the journal style requires a Celsius value, eXtyles can add a comment that highlights the problem and includes the conversion formula that the editor can use to fix it. Comments can include text to instruct the editor or the author to reword or avoid ambiguity in contexts where automation just isn’t safe. So, for example, changing “data is” to “data are” or “analyses” to “analyzes”. These are things where it might be more useful to add a comment directing the editor to take a look rather than automating the change itself. Auto-Redact can perform some style validation checks by alerting when a reference heading, or maybe an acknowledgements heading, may be incorrectly styled as a heading 1 rather than a references or acknowledgements heading. That could potentially cause parsing errors, so this can be a really useful way to highlight possible style changes automatically using Auto-Redact. And comments can also include instructions on how to rephrase content when it’s not safe to automate the correction. So for example, how should “and/or” be expanded? So, lots of potential in Auto-Redact’s ability to insert Word comments.
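To make the Fahrenheit example concrete, here is a minimal sketch of that kind of rule. This is illustrative Python, not eXtyles’ actual rule engine, and the pattern and function names are invented:

```python
import re

# Matches values like "212 °F"; a real rule set would handle more variants.
F_PATTERN = re.compile(r"(-?\d+(?:\.\d+)?)\s*°F")

def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

def comments_for(text):
    """Generate editor-comment text for each Fahrenheit value found.

    Rather than silently rewriting the temperature, this mimics the
    approach described in the talk: flag the problem and hand the
    editor the conversion so they can make the change thoughtfully.
    """
    comments = []
    for match in F_PATTERN.finditer(text):
        celsius = f_to_c(float(match.group(1)))
        comments.append(
            f"Journal style requires Celsius: {match.group(0)} = "
            f"{celsius:.1f} °C. Please confirm and update."
        )
    return comments

print(comments_for("Samples were heated to 212 °F for one hour."))
```

The point of the sketch is the division of labor: the automation supplies the arithmetic, while the human decides whether to apply the change.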

However, it’s important to be aware of comment deluge. One thing you’ll want to avoid is using Word comments too frequently or in situations where they will potentially be added numerous times throughout the document. They can clutter the document and create too much noise. So, just something to be aware of when you’re getting excited about this opportunity in Auto-Redact.

So, let’s go on to another example: Auto-Redact and text highlighting. As mentioned, Auto-Redact can also apply Word text highlighting to content that you want to, similar to comments, draw the editor or the author’s eye to, or to alert the editor to content for which an automated change isn’t safe. Highlighting can be useful in places where Word comments aren’t allowed – for example, in footnote or endnote panes if your workflow retains linked notes. Highlighting color is customizable, and different colors can be used to highlight different pieces of text. So for example, one color can be used to highlight “data are” and one can be used to highlight the word “insure”. It’s probably not safe for us to make these changes automatically, but your editor’s eye is now drawn to this text and they can make the thoughtful decision on whether or not a change is required.

However, a few caveats about using Auto-Redact to apply highlighting to text. Note that highlighting can be problematic for accessibility: for example, it may be skipped by screen readers, and the color may be difficult for some users to see. Also, although you can search for highlighted text using Word’s Find and Replace, apply it manually from the Home ribbon, and remove it automatically using eXtyles post-processing cleanup, highlighting is less easy to manipulate and navigate through in the document than Word comments. Finally, depending on the shade of highlighting that’s used, it can resemble the character styles that eXtyles uses. And importantly, Word highlighting and Word character styles are not the same thing.

That said, using Auto-Redact to add highlighting to text that needs the attention of the editor or the author is a useful tool. Also, Auto-Redact can be used to apply Word character styles to text that has not already been captured through an eXtyles advanced process.

Importantly, Word character styles are used by eXtyles to perform content validation, such as citation matching, and to produce granular markup in the XML. So, you really only need to consider using Auto-Redact in this way if you, for example, have specific markup requirements that aren’t being met by an eXtyles advanced process. And these situations do arise. So, for example, if your content includes redacted text that requires special markup in the XML to render it correctly, Auto-Redact can be configured to apply a character style to redactions in Word and generate the correct markup in the XML.

Likewise, if your content includes version history and it’s important to capture the date, and the version information in the XML as semantically meaningful markup, as shown here, Auto-Redact can be used to help achieve that. So, in this example, the Auto-Redact process applied the character styles to the dates and the versions that you see here. And then the export process created this semantically meaningful markup that you see in the example slide. So typically, setting up Auto-Redact to apply character styles is a very custom setup based on specific markup requirements because we generally recommend that character styles be applied by an eXtyles advanced process.
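The actual markup from the slide isn’t reproduced in this transcript, but as a rough sketch of what “semantically meaningful” version-history markup can look like, here is an illustrative Python snippet. The element and attribute names are hypothetical, not necessarily what an eXtyles export would produce:

```python
import xml.etree.ElementTree as ET

def version_event(date_iso, version_label):
    """Build one hypothetical version-history event as an XML fragment.

    A character style applied in Word (for example, by Auto-Redact) is
    what would let an export process emit structure like this instead
    of leaving the date and version as undifferentiated text.
    """
    event = ET.Element("event")
    date = ET.SubElement(event, "date", {"iso-8601-date": date_iso})
    date.text = date_iso
    version = ET.SubElement(event, "version")
    version.text = version_label
    return ET.tostring(event, encoding="unicode")

print(version_event("2022-09-15", "Version 2"))
```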

If you’re interested in this, the services team would want to discuss with you in more detail to ensure that using Auto-Redact in this way is safe and appropriate. But in instances where it is safe and appropriate, it’s a really cool way to use automation, to add this sort of detail and granularity to your content and to your XML. One final note, as requirements around accessibility evolve, we on the services team think that Auto-Redact may be able to play a role in adding richness to content.

So just a couple examples. It could be used to maybe tag a term that needs to have a pronouncing attribute in the XML, so that a screen reader can pronounce the text correctly. It could be used maybe even to check links in a document to make sure that they don’t just say “click here,” for example.

All of this is to say, watch this space. We think there’s a lot of potential opportunity to use Auto-Redact in this way.

I want to spend just a minute reminding everyone that the eXtyles Compare to Baseline feature is a very useful tool when used in conjunction with Auto-Redact. Remember that before many eXtyles processes are run, a baseline, or copy, of the file is saved in your working directory. The baseline files can be used to perform document compares in Word, which allows you to see, with track changes, the changes that were made during the eXtyles process that was just performed. A baseline file is always saved immediately before Auto-Redact is run. And by the way, we can configure eXtyles to save baseline files before most processes. So if you’re not getting a file saved before a process and you’d like there to be one, let us know and we can discuss configuring that for you. And so you can use Compare to Baseline to view a tracked version of the Auto-Redact changes, as shown here.

We generally recommend that new eXtyles users use Compare to Baseline regularly when becoming familiar with the types of changes that Auto-Redact makes. But it’s not a bad idea to use it periodically to proof Auto-Redact changes. Note, also, that you can compare changes at either the word or the character level.

So, this all demonstrates the versatility of Auto-Redact and the creative ways in which you can leverage eXtyles automation to add accuracy, consistency and efficiency to your editing workflow.

Even so, limitations existed with this classic Auto-Redact functionality. Auto-Redact is context specific, but only for paragraph styles, historically. Different rules could be applied to different publications, which was great, but not for different documents within those publications. Auto-Redact could enforce your organization’s editorial style, of course, but it couldn’t gracefully standardize, or sometimes even standardize at all, some complexities, such as name-date citations.

So in 2017, we announced enhanced Auto-Redact, “A New Age”: two new Auto-Redact options that add increased functionality and depth to what Auto-Redact has to offer. The first enhancement is making Auto-Redact document specific. Auto-Redact can now be configured to apply distinct rules to different documents within the same publication. So if, for example, you have a publication that publishes articles in both English and French and you have different grammatical rule sets for those two languages, you can now apply those to different articles within the same publication.

Second, we introduced Late-Stage Auto-Redact. This is an Auto-Redact pass that runs in addition to standard Auto-Redact and after eXtyles advanced processes have been run, which allows it to leverage the character styles that are applied via those advanced processes, to produce more targeted, sophisticated changes. So, let’s take a look at these in a little more detail. When enhanced Auto-Redact is configured, the Auto-Redact button on the eXtyles ribbon becomes a drop-down menu that offers Auto-Redact options that can be applied on a per-document basis. For example, you may have language-specific rule sets for a publication that publishes articles or chapters in different languages, as mentioned earlier.

Or, you may have article rules that are different per document within a publication. For example, you may have different editorial rules for research articles than for blog posts, than for rapid reports and so on. You may even want an option to run a subset of editorial rules on some documents while retaining the option to run the full set of rules on others. For example, a light touch option for qualifying articles.

The bottom line is that with this enhanced Auto-Redact option, you can be much more specific in the precise rules that are run on a given document, and it may be worth revisiting your Auto-Redact setup to see where there are opportunities for a setup such as this.

This brings us to the other enhanced Auto-Redact feature that debuted in 2017: Late-Stage Auto-Redact. As mentioned, this process runs after eXtyles advanced processes, so that it can take advantage of the character styles that those processes applied. This opens the door to lots of cool editorial cleanup that Auto-Redact previously couldn’t safely perform, and we’ll get into some examples next. Also, because this Auto-Redact pass runs later in article processing, it’s a good opportunity to configure into it some pre-export style checks to help avoid parsing errors that might occur during document export and to perform a final style check before the content moves forward in your production workflow.

So, let’s look at some examples.

Probably the most requested Late-Stage Auto-Redact setup we’ve been asked to do is the cleanup of name-date reference citations. Historically, this was just not safe to clean up via standard Auto-Redact because of the complexity of limiting the cleanup to just citations and not other text. But because Late-Stage Auto-Redact is run after citation matching, eXtyles can use the cite_bib character style that’s applied to the citation to perform focused changes to just citations, as you can see here. Beyond citation cleanup, Late-Stage Auto-Redact can also be used to standardize publisher names in a reference. Again, this is something that historically was not safe for Auto-Redact to clean up, but because Late-Stage is run after reference processing, changes can be focused on text that’s styled as bib_publisher only. So, in this example, standardizing the name of the publisher, Wiley.
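As an illustration of what name-date citation cleanup can mean in practice, here is a hedged sketch in Python. A real Late-Stage setup would restrict the change to text carrying the cite_bib character style; this regex-only version, with an invented house-style rule (dropping the comma before the year), just shows the kind of transformation involved:

```python
import re

# Hypothetical house style: "(Smith et al., 2020)" -> "(Smith et al. 2020)".
# Only a sketch; eXtyles itself scopes such rules to cite_bib-styled text.
NAME_DATE = re.compile(r"\(([^()]+?),\s*((?:19|20)\d{2}[a-z]?)\)")

def normalize_citations(text):
    """Drop the comma between author names and year in parenthetical citations."""
    return NAME_DATE.sub(r"(\1 \2)", text)

print(normalize_citations("Prior work (Smith et al., 2020) showed this effect."))
```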

Also, Late-Stage Auto-Redact can be used to check the publication status of references that may have been added during the editing process – that is, after reference processing has already been run. In this way, your editors can review whether additional updates need to be made before a document moves forward in the production workflow. Because Late-Stage Auto-Redact runs toward the end of the eXtyles workflow and right before document export, it can be used to identify whether some key paragraphs are styled correctly. For example, if an abstract heading has been incorrectly styled as a head 1, Late-Stage Auto-Redact can be used to add a Word comment, as we discussed previously, with text that asks the editor to review the paragraph styling to ensure it’s correct. In this way, some parsing errors can be avoided. This would apply also to elements such as reference heads, acknowledgement heads, and so on.

Also, the wording of text can be verified at this stage. For example, if your editorial style is to use the text “references cited” rather than just “references,” Late-Stage Auto-Redact can be configured to call out when that wording might not be correct. Late-Stage Auto-Redact, just to note, is not as robust a content validation tool as, say, Schematron. But it can be set to provide useful warnings, as in the examples shown here, for such big-ticket items as mis-styled headings.

So, how do you make Enhanced Auto-Redact work for you? I encourage you to think about a few simple questions.

Do you have content that uses a name-date reference style?

Are there style rules you’d like validated prior to XML export?

Do your publications allow article- or document-specific style rules?

And would you like to be able to impose a subset of style rules on some documents?

If the answer to any of those questions is yes, Enhanced Auto-Redact may be for you. Reach out to us at the services team for questions or more information about how we might configure this setup for you.

So, thank you very much. I hope you learned a little bit about Auto-Redact, some existing features, and maybe some newer features. If you have any questions or want to implement this, drop us a line. Thank you very much.

Jenny: Alright, thanks everyone. Happy to answer any questions. I know Auto-Redact is kind of old hat, so–

Liz: It is, in some ways, and I have a question just for the attendees, not to put people on the spot. But Jenny, I think you and I would probably agree that we’ve been really good at leveraging these new tools where we feel that they’re appropriate for new customers that we’re onboarding. And maybe one of the reasons why we wanted to talk about it today is because, maybe, we haven’t been as proactive as we could be with existing customers who are using “classic” Auto-Redact and have been for quite some time and maybe aren’t aware of the ways in which they could be leveraging some of the new functionality. So I’m just throwing that out there.

The questions that Jenny asked at the end of her talk are things that I hope people will think about and reach out to us. I’m putting you on the spot, Jenny, but I think you and I would be happy to just have a quick conversation or do a quick audit, you know, with existing customers if they have ideas for how they can make use of some of this new functionality.

Jenny: Yeah, absolutely. And Joni can correct me if I’m wrong here, but I think the last few configurations that we’ve set up together for new customers have almost 100% included at least Late-Stage Auto-Redact. Because we just find that a really good opportunity to– or a good place to handle some of the rules that we want or are being asked to implement, right? Yeah. So, I think you’re absolutely right, Liz.

So, for those of you who have been using Auto-Redact for a long time, you know, it’s always a good idea to kind of blow the dust out of the corners, see how you’ve been using it, and ask whether there are opportunities for, you know, updates. One of the things that we always like to remind eXtyles users is that if you find you’re making the same kind of manual change in your manuscripts over and over again, that might be an opportunity for automation. And so, in those instances, we encourage you to reach out to us and ask. And look, we’ll be honest with you: if it’s not something we feel we can safely automate through Auto-Redact, we’ll let you know. But it’s always worth having the conversation.

Gianna: It looks like Lisandro has a question here.

Lisandro: Hi. I was just wondering about the possibility of using Late-Stage Auto-Redact to undo a word edit that you did in Auto-Redact. Is that possible?

Jenny: Interesting. Do you have a for example?

Lisandro: Yeah, for example, here at NCBI Bookshelf, we use processing instructions to give more flexibility to our providers. So, like, for instance, removing the processing instruction after adding it, like an escape character.

Jenny: Yeah, absolutely. There’s the possibility of doing that. Any time we can identify a pattern, we can probably set something up. And as you mentioned, Late-Stage is probably a great time to do that. And by the way, also for Bookshelf, and for any other user who is using an SI implementation of eXtyles, Late-Stage Auto-Redact can also be run through a manifest. So, this is absolutely something that you could put into that workflow, as well.
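For what Lisandro describes – undoing an earlier insertion at a later stage – a pattern-based rule is straightforward in principle. Here is an illustrative sketch; the processing-instruction target “bookshelf-pi” is made up for this example and is not an actual NCBI Bookshelf instruction:

```python
import re

# Strip a (hypothetical) processing instruction inserted earlier in the
# workflow -- the kind of reversal a Late-Stage pass could perform.
PI_PATTERN = re.compile(r"<\?bookshelf-pi\b[^?]*\?>")

def strip_pi(text):
    """Remove every occurrence of the hypothetical processing instruction."""
    return PI_PATTERN.sub("", text)

print(strip_pi('Keep this <?bookshelf-pi flag="x"?>and this.'))
```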

Lisandro: Thank you.

Jenny: Great. Anything else? Alright, well we’re a couple minutes early. Let us know if you have any questions. I don’t know that I put the support email out there, but it’s [email protected] if you wanted to send any questions or requests our way. That’s also a really great way to get a hold of us.

Gianna: Or you just want to say hi to me.

Jenny: Yeah, that’s right. Just say, say hi to Gianna.

eXtyles+: A New Partnership with J&J

Presenters: Michael Casp, Allegra Torres, and Jasmine Trinks, J&J Editorial

Liz: Okay, great. Well, welcome back everyone for the final session of the day, which is “eXtyles Plus: A new partnership with J&J” from our colleagues at J&J Editorial.

So to introduce the three speakers from J&J: we have Michael Casp, who is the director of business development. His background is in production workflows and peer review management, and he has worked directly with several society, commercial, and open access publishers. In his current role, he collaborates with customers to build optimized solutions to support their publishing programs.

Allegra Torres is a senior copy editor and team lead at J&J Editorial. She works on multiple copy editing and proofreading projects covering everything from medicine to engineering, but she remains a devotee of the Chicago Manual of Style. Good to know.

Liz: Jasmine Trinks is a copy editor and team lead at J&J Editorial. She works on several different copy editing, fact checking, and technical editing projects in fields ranging from oncology to physics and she lives in Chapel Hill, North Carolina. So, welcome to our guests, and thank you very much.

Michael: Thanks Liz. Hi everyone. Michael Casp here. Looking forward to talking with my colleagues Allegra and Jasmine about how J&J’s partnered with Inera to offer something a little bit different than what we’ve been able to offer before and we’re calling it eXtyles Plus.

So first, a little background on J&J Editorial. For those of you who aren’t familiar with us, pictured here are J&J’s founders, Julie Nash and Jennifer Deyton. Julie and Jen started J&J in 2008 after several years of supporting peer-reviewed journals, and they really saw an opportunity to offer professional managing editor roles and publishing services to the journal community, and they took the lead in developing best practices in how journal offices interact with stakeholders and use peer review software. J&J found this niche, and over the years we added related services for journals and books and other scholarly communications, like copy editing, production system support, and consulting. All right.

One year ago, Wiley acquired J&J Editorial to add a publishing services company to their stable of publishing technology offerings. Within a few weeks of the deal closing, Liz and Bruce reached out to J&J’s leadership team to set up some get-to-know-you calls, and it was pretty quick that we all saw there was a lot of value in combining Inera technology with J&J’s copy editing and production staff to offer publishers a really great product.

So, we started piloting eXtyles editorial features on an existing production project to test this hypothesis. Allegra’s going to talk a little bit about this project with ASTM later on. This new relationship with Inera has also changed the way I sell J&J services.

So, earlier this year I was talking to a customer who was looking for improvements to their production process. The algorithmic pre-editing they were getting from their vendor was not quite what they wanted, and it wasn’t really as customizable as they needed it to be. So, you know, I thought, well, Inera can fix that. So, I pulled Liz and Bruce into the conversation, and as we talked to the customer it became clear that, you know, they didn’t want to run eXtyles themselves. They didn’t have the bandwidth or, really, the interest in doing that. Their current arrangement was just to send their manuscripts off and get them back pre-edited. So, sort of on the fly we came up with this idea that, oh, well, we’ll do it for you. J&J staff will use eXtyles customized for your journal and we’ll run the automated pre-edit for you. We might even resolve some of the things that eXtyles flags for us, so that, you know, when we send the manuscript back, it’ll be cleaned up and ready to go.

They quite liked this idea and eXtyles Plus was born. So, this’ll be the Mineralogical Society example that Jasmine will talk about later on.

So, what are we really trying to address with this combination of J&J and Inera? You know, I talk to publishers all the time and I try to coalesce sort of the concerns I’m hearing into some groups here on this slide.

People are looking for speed and automation, you know. No surprise there.

People want research integrity. And we find, you know, that eXtyles reference validation definitely gives us a leg up on that.

People need process flexibility. You know? I mean, it feels like we’re always in a moment – ever since I entered the industry, we’re always in a moment of change. So, you know, we have to be flexible with the content types we’re working on and linking to. People need bandwidth. As things grow and change, the ability to add J&J’s staff alongside, you know, Inera’s technology is really interesting for folks.

People are obviously thinking about standards and how to align with best practices. eXtyles is great to help us, you know, get things standardized. People are, you know, concerned about accessibility, which eXtyles definitely supports by adding a lot of structure to documents. And J&J also has other accessibility features that we can add at different points of the workflow. And of course people want interoperability with their existing process. A human editor can flex to work in all kinds of processes and workflows in ways that, you know, a piece of software might not be able to in the timeframe they need it to.

So with that, I’ll stop and I’ll come back later to talk a little bit more, but for now I’ll pass it off to Allegra and Jasmine to talk about a couple of their projects that they’re working on right now.

Allegra: Awesome, thank you. So, hi, I’m Allegra Torres. This is my esteemed colleague, Jasmine Trinks. I’m on the ASTM team, as previously mentioned, and Jasmine’s on MSA. And the two of us have different experiences with incorporating eXtyles into our project workflow, so hopefully we can shine some light on that. So on the ASTM team, we handle copy editing for ASTM International, which is an organization that develops and publishes technical standards for a wide variety of materials, products, and so on. They have multiple journals that we handle from editorial to production. And this is an established project, as Michael mentioned, that has had a steady copy editing team for a while. eXtyles was not part of our original process, so when we were bringing that on, it was interesting seeing how it got incorporated into our workflow. Prior to eXtyles, the copy editors did all of the manuscript cleanup and formatting manually before doing the very detailed, very thorough copy edit. Jasmine’s team, on the other hand, is the more recent team to adopt eXtyles.

Jasmine: That’s true. Thanks, Allegra. So I work on the MSA team – for the Mineralogical Society of America – on the journal American Mineralogist, which is the flagship journal of the Mineralogical Society and has been continuously published since 1916. The journal’s home to some of the most important advances in Earth and planetary sciences. As of right now, we’re handling a technical edit, or pre-edit, for the journal, which is sent to us via eJournalPress. We run eXtyles on research articles and then send them back to the client for copy editing. And now I wanted to talk a little bit about the onboarding process with MSA, ’cause it’s been sort of an interesting experience, but we’ve all come to really value eXtyles in our workflow. So Inera sent us the first eXtyles build in July 2022, and Jenny at Inera conducted a virtual group training session with our CEs, which was really helpful, really thorough.

We ran into some challenges, one of them being that with the Wiley acquisition, we had to have some of our laptops changed out, which meant that some of our copy editors had the 64-bit version of Word instead of the 32-bit version, which meant that they could not run eXtyles. So, we had to work on that for a bit. We had to troubleshoot using eXtyles with OneDrive. We discovered that eXtyles will not run with OneDrive syncing turned on. But, after developing workarounds for these hiccups, we began working on practice papers with a lot of success. Copy editors completed several practice papers and sent them to Inera and the client for feedback. The feedback was returned from the client with several issues that needed to be addressed by both an updated eXtyles build and updated direction to the copy editors. Inera then sent a new build in late September, and we’ve been getting our copy editors acclimated to the new version, and are now working on live papers, which so far has been going really smoothly, which is great. Allegra, would you like to talk a little bit about onboarding for ASTM?

Allegra: Yes. As you know, our onboarding process was similar in that we ran into most of the same challenges that the MSA team did: the 64-bit Office, the OneDrive syncing. Once we got through those, it was pretty smooth sailing from there on out. The main issue that we ran into after that was that we started using eXtyles right around the same time that we also had a big staffing change in our team and a very big backlog of articles that we had to get through. So, these were perhaps not the ideal circumstances in which to learn new software, but we made it through: backlog’s clear, software’s learned, all is well. Unlike MSA, also, the client was not involved with the creation of our particular build of eXtyles. Inera worked directly with our copy editing team to build something based off of the style guide that we already work from. And incorporating eXtyles into our workflow, in terms of how it affected further steps down the line, didn’t cause any issues for the typesetting process or for our production people. So, that was very nice. As far as copy editing goes, Jasmine, would you like to touch on how MSA does copy editing?

Jasmine: For sure. So we’re going to talk a little bit about the actual copy editing requirements for these projects. To a certain extent, requirements for MSA are still evolving, because we’re in the early stages of our relationship with the client, but as of right now we’re working to ensure precise formatting of headings, tables, citations, and other parts of research articles, and accurate verification and formatting of references in the Harvard style. One unique thing about working with MSA so far has been direct feedback from the client during setup and onboarding, so that Inera could better customize eXtyles for their needs. Specific direct feedback from the client is not always a given, as Allegra and I have talked about before, so we really appreciate that aspect of this project. An example of that is we recently got some feedback from MSA that they want post-processing cleanup run on their articles, which removes Word text shading and eXtyles tags for a cleaner look that’s closer to what the articles will look like in print, which is what MSA is looking for right now.

For us so far, the eXtyles user interface has been intuitive and easy to use. It’s easy to see which processes have been run on a file and which have not, thanks to the helpful green check marks next to each process. And so far we’ve found that eXtyles takes a lot of the grunt work out of technical editing and ensures consistent formatting. And on this slide you’ll see an example of one of our research articles before and after eXtyles processing. So this particular slide is just several paragraphs with headings, text, and reference citations and table citations. One difference is once we actually run post-processing cleanup, we won’t see the Word text shading on those. And with the next slide we have a screenshot of the reference list before and after eXtyles processing. And as copy editors, we just really appreciate all the work that eXtyles does, especially with PubMed and Crossref, making sure all those links are correct and everything’s just where it’s supposed to be. ’Cause that’s a lot of work for one copy editor or two copy editors to do. Allegra, did you want to show some ASTM examples?

Allegra: I certainly do. As you can imagine, for a group that deals in technical standards, they have very detailed, very rigorous requirements for their copy editing. So we divide our workflow basically into two parts. The first part is the technical edit, very similar to what Jasmine and the MSA team do, which is the initial intake formatting of the manuscript, checking references, checking callouts and citations. And then, once that’s complete, we move on to the full copy edit. One of the fun things with this particular set of journals is that they use both the Vancouver superscript number citation style for some journals and the Harvard author-date style for another journal. So eXtyles has been fantastic for keeping track of that. When you start using it, you basically choose which journal it’s going to be, and from there on out eXtyles does the job of enforcing that particular reference style, which is phenomenal for keeping people from making those kinds of idle mistakes. I have seen authors use both styles in one paper, and it gets very confusing at that point to remember which journal you’re working on.

So, the example that I have here is the Harvard author-date style. With these, because we’re also checking all of the callouts, making sure that everything has been called out accurately and correctly, having eXtyles flag instances of references either being cited but not appearing in the list, or appearing but not being cited, is invaluable for copy editors, particularly when we’re working on papers that have very long reference lists, which is not uncommon for engineering papers. The next slide, I believe, has the Vancouver superscript number citation style. Again, having eXtyles do those pre-copy-edit technical checks has been great for keeping track of these things. It has been pretty good for formatting references. We do have a lot of more obscure reference types. Authors like to cite a lot of unpublished conference proceedings and standards, sometimes standards that only exist in their country as opposed to international ones. So some of these can’t be as easily formatted by eXtyles and can’t be verified in Crossref and PubMed, but eXtyles can at least draw attention to those, try to format them the best it can, and leave a lot of helpful comments for things to look at when you’re working through the reference list.

And I think the last one that I have here: we also check all of the figures and tables, and eXtyles has been phenomenal for catching when the author has failed to call something out or perhaps called something out in the wrong order. In terms of how it’s affected our overall process, it doesn’t replace the technical edit, but as Jasmine mentioned, the automation of these tasks takes a lot of the rote mechanical work off the copy editor’s plate, which leaves you with more energy to focus on tricky grammatical questions and finding references that only appear on the internet once and never again.

Jasmine: Never to be found. No, just kidding. One of the major benefits of eXtyles we’ve come to appreciate through this process is its customizability and flexibility. With different builds and so many different options for running eXtyles, we can successfully use it to deliver a satisfactory product to very different clients. And looking ahead to our future relationship with Inera, we’re looking forward to continuing collaboration on these projects and getting to know the software even more.

Michael: Awesome. Well, thank you, Allegra and Jasmine, for laying that out. Yeah, we’re really excited about what eXtyles unlocks for us and our customers. For both of these projects we’re really just using the editorial features; we’re just starting to look at some of the XML export features as well.

And for an example of that, we’re actually talking to a customer right now about something like this. This is a current J&J customer. We do copy editing on their society news publication, which has some scholarly flavor to it, research and references, but it’s not journal content. Pieces are shorter, and they have production schedules on the order of a couple of days rather than a few weeks or months. And in certain parts of the year, around society events, annual meetings, things like that, the timelines can get even more compressed as they do live event coverage. So this society publisher approached J&J to ask about some possible improvements to simplify and streamline their workflow. They were hoping J&J could take on some additional stages in the production process for their society news.

The key items they would need: algorithmic pre-editing with reference validation, XML conversion, staging the content on their publishing platform, and then a quality control check after that. So on the next slide, this is their current workflow, what they’re trying to simplify. A quick walkthrough: society staff gets the manuscript; it goes offshore for an algorithmic pre-edit; it comes back to society staff; then they send it to us at J&J for the copy edit; we send it back to the society, who then sends it back offshore for XML conversion; and then some combination of vendor, staff, and I think even freelancers stages the content on their publishing site; and then they QC it and publish. So a little complicated. I mean, it’s not outlandish by scholarly publishing standards, but if you’re trying to get all this done in the span of a day or two, that’s kind of a lot. And it’s definitely a heavy burden on society staff, which is compounded by the fact that the spikes of content come in when society staff bandwidth is at its most compressed, again around the annual meeting. So overall this customer’s really trying to optimize for speed and low staff burden if they can. And, thanks to our relationship with Inera, J&J can offer just that.

So, the workflow we proposed for them looks a little more like this. Content comes in to society staff as before. The society sends it to J&J, where we use eXtyles to do the pre-edit, then we do our copy edit, then we convert the content to XML, stage it on their publishing site, and run a QC checklist against the content. Then, theoretically, all the society staff would really need to do at this point is click publish, and we can do this all in a day or two, depending on what time of day the content comes in. Obviously, this is going to relieve a lot of burden on the society staff and eliminate the need for multiple vendors, and they don’t even need a separate contract with Inera, as we’re rolling this all into one agreement with J&J, because we’re going to manage the eXtyles implementation on their behalf.

We’re really hopeful this project materializes, because I think it exemplifies the strengths of both J&J and Inera, and it can hopefully lay the groundwork for an interesting workflow model going forward. I obviously don’t have to convince this audience of the value of having the production process close to home. And I think there are a lot of publishers out there who have complex workflows that might not be serving their needs as well as they’d like. We’re definitely hoping to offer them a new option with eXtyles Plus and this J&J Editorial partnership with Inera. I feel like we’re offering something a little new here, a little different: a new level of editing quality mixed with speed, and it’s all available at scale. So we’re really looking forward to seeing where this goes. And with that, thank you. We’ll take questions.

Liz: Thank you very much, all of you. If anyone has any questions, either put them in the chat or feel free to unmute your microphone and chime in. I just wanted to reiterate something that you all alluded to, which is that it’s interesting because the relationship formally is with J&J, and that does simplify things for the customer. Like you say, they don’t have to have a separate arrangement with us, but we have been working with you and the customer collaboratively, because we are all colleagues, and I think that not only facilitates the process but adds an extra level of quality control and high touch to what these customers are getting. So hopefully you all have felt that that’s been a good way of moving forward with these types of projects.

Michael: Oh, absolutely, yeah. It’s great. I mean, Liz and I have a monthly call where we just talk about all these kinds of things, and we’ve definitely found that we really like the Inera people. I don’t know how else to say it. We’ve really enjoyed getting to work with them and getting to know them, because it seems like we’re just on the same wavelength. They come from the technology side, we come from the people services side, and we’re kind of meeting in the middle.

Liz: But we speak the same language for the most part. So that’s been very good.

Michael: Yeah. It helps that you used to be a copy editor, I think.

Liz: Many of us used to be copy editors, so it’s in our blood. And I will point out that Ulysses said, of Jasmine’s screenshot, that there’s nothing quite like the sight of a fully eXtyles-processed reference list. And I have to say, I’ve been working with eXtyles-processed reference lists for over 20 years, and I agree, I still get excited when I see one. A fully processed reference list is such a beautiful thing.

Jasmine: Some of us said in our planning meetings that running eXtyles feels like it scratches your brain in a really satisfying way. Or it does for me.

Liz: Oh, that’s great.

Jasmine: It’s enjoyable. So.

Liz: Any questions?

Michael: I’ll say the only downside of this is that I don’t actually get to use eXtyles. I just talk about it all the time, and then they get to use it.

Liz: We can give you eXtyles if you want.

Michael: Can you? Can I get, like, the free lessons? Just to experience this.

Liz: Just if you want to run a reference list and get that satisfying brain scratch every once in a while.

Allegra: It’s like ASMR content: look at this reference list now that it’s been reformatted. Look how organized it is.

Jo: I love Cindy’s comment.

Liz: She tells everyone, yeah, that “eXtyles is my favorite logic puzzle.” But it does most of the puzzling for you, right Cindy? Ideally.

Cindy: It does, but sometimes there’s some puzzling involved in figuring out how to tag things exactly right, when you’ve got an especially complicated document or, speaking personally, eXtyles builds.

Liz: Or like the sort of things Joni was talking about earlier.

Cindy: Right. Yeah.

Bruce: I’ll add one more comment, which is that over the years Liz and I have definitely run into organizations that said, you know, we love eXtyles, but we don’t want to take on running it in house. And now, thanks to the Partner Solutions structure, what we’ve really got is the dream team combination, with Inera and J&J working together. Inera has always taken the position that we do not run the software on behalf of customers, but we can partner with J&J. J&J can be the primary point of contact with the customer, so that J&J can run it. So it really does fill a market need for people who wanted eXtyles but just didn’t want to run it in house. And we’re really thrilled with that.

Michael: Definitely. Yeah, I’ve been aware of eXtyles for many years, and I even toyed with the thought, pre-acquisition, of trying to build a relationship here, but we never quite had the right arrangement, the right project to do it with. So this all kind of came together really well for me personally and

Liz: And quickly!

Michael: But also professionally and quickly, yes.

Liz: We’re pretty gratified by the fact that, like I said, J&J has only been part of Wiley and Partner Solutions for a year at this point, and we already have multiple customers that we’re working with on joint projects. So yeah, it’s been great.

Allegra: Michael, did you orchestrate this whole acquisition just so you can have an in with Inera?

Michael: I was trying to keep that a secret, but yes, I actually bought all of scholarly publishing, including Wiley, yeah. Yeah, that’s you know, secret though.

Liz: Okay, well, it looks like we’re right about at time.