SPARQL

There are a few tutorials out there about how to start up your own free-tier Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instance and then run your own publicly available web server. I’ve planned for a while to try this with a Jena Fuseki triplestore and SPARQL endpoint, but I postponed it because I thought it might be complicated. It turned out to be pretty easy.
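Once Fuseki is up, a quick way to confirm that the endpoint actually answers queries is a catch-all SELECT with a LIMIT. The host and dataset name below are placeholders; Fuseki listens on port 3030 by default, so the query endpoint looks something like http://your-ec2-host:3030/yourdataset/sparql.

    # Minimal sanity check: ask for any ten triples in the dataset.
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10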

More Picasso paintings in one year than all the Vermeer paintings?

Answering an art history question with SPARQL.

Sometimes a question pops into my head that, although unrelated to computers, could likely be answered with a SPARQL query. I don’t necessarily know the query off the top of my head and have to work it out. Here I walk through one example and the steps I took, because I wanted to show how I navigated the Wikidata data model to get what I wanted.
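To give a flavor of where this ends up, here is a sketch of the kind of query that settles the question on the Wikidata query service (not necessarily the exact query from the post). The identifiers are standard Wikidata ones: Q5593 is Pablo Picasso, Q41264 is Johannes Vermeer, Q3305213 is the class for paintings, and P31, P170, and P571 are “instance of”, “creator”, and “inception”.

    # Count Picasso's paintings by year of inception; running it again with
    # wd:Q41264 (Vermeer) and no grouping gives his total for comparison.
    PREFIX wd:  <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT ?year (COUNT(?painting) AS ?paintings)
    WHERE {
      ?painting wdt:P31  wd:Q3305213 ;   # instance of: painting
                wdt:P170 wd:Q5593 ;      # creator: Pablo Picasso
                wdt:P571 ?inception .    # inception date
      BIND(YEAR(?inception) AS ?year)
    }
    GROUP BY ?year
    ORDER BY DESC(?paintings)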

Two recent articles describe a fascinating use of SPARQL to improve data quality in a knowledge graph at the successful grocery delivery service Instacart. On Reliability Scores for Knowledge Graphs (pdf) is a short paper submitted to the 2022 ACM Web Conference in Lyon, and a longer piece on Instacart’s tech blog is titled Red Means Stop. Green Means Go: A Look into Quality Assessment in Instacart’s Knowledge Graph.

Generating websites with SPARQL and Snowman, part 2

With Rhizome's excellent ArtBase SPARQL endpoint.

In part one of this two-part series, we saw how the open source Snowman static website generator can generate websites with data from a SPARQL endpoint. I showed how I created a sample website project with its snowman new command and then reconfigured the project to retrieve a list of artists from the endpoint of Rhizome’s ArtBase, a repository of data about digital artworks going back to 1999. Here in part two I will build on that to add lists of artists’ works with links to Rhizome pages about…
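For a sense of what that retrieval involves, here is a sketch of an artist-list query of the sort a Snowman query file might hold. ArtBase runs on Wikibase, so its query service understands the usual wd:/wdt: prefixes and the label service, but the class and property IDs below are placeholders, not ArtBase’s actual identifiers.

    # Hypothetical artist-list query; wd:Q2 and wdt:P1 are placeholder IDs.
    SELECT ?artist ?artistLabel
    WHERE {
      ?artist wdt:P1 wd:Q2 .   # placeholder for "instance of: artist"
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    ORDER BY ?artistLabel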

Queries to explore a dataset

Even a schemaless one.

I recently worked on a project where we had a huge amount of RDF and no clue what was in there apart from what we saw by looking at random triples. I developed a few SPARQL queries to give us a better idea of the dataset’s content and structure, and these queries are generic enough that I thought they could be useful to other people.
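As a taste of those queries, here is one in the same spirit (not necessarily verbatim from the post): it reports every predicate used in the data and how many triples use it, which quickly reveals the shape of an unfamiliar, schemaless dataset.

    # Inventory the predicates in the dataset, busiest first.
    SELECT ?p (COUNT(*) AS ?triples)
    WHERE { ?s ?p ?o }
    GROUP BY ?p
    ORDER BY DESC(?triples)

A similar query that groups on the objects of rdf:type does the same job for the dataset’s classes.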

In my last posting I described Carnegie Mellon University’s Index of Digital Humanities Conferences project, which makes over 60 years of Digital Humanities research abstracts and relevant metadata available both on the project’s website and as a zipped CSV file that they update often. I also described how I developed scripts to convert all that CSV to some pretty nice RDF and made the scripts available on GitHub. I finished with a promise to follow up by showing some of the…

I think that RDF has been very helpful in the field of Digital Humanities for two reasons: first, because so much of that work involves gaining insight from adding new data sources to a given collection, and second, because a large part of this data is metadata about manuscripts and other artifacts. RDF’s flexibility supports both of these very well, and several standard schemas and ontologies have matured in the Digital Humanities community to help coordinate the different data sets.