{"id":415,"date":"2023-06-23T15:44:16","date_gmt":"2023-06-23T07:44:16","guid":{"rendered":"http:\/\/blog.cyasylum.top\/?p=415"},"modified":"2023-06-23T15:45:28","modified_gmt":"2023-06-23T07:45:28","slug":"build-an-interactive-game-of-thrones-map-gafu5","status":"publish","type":"post","link":"http:\/\/blog.cyasylum.top\/index.php\/2023\/06\/23\/build-an-interactive-game-of-thrones-map-gafu5\/","title":{"rendered":"[\u8f6c\u8f7d] Build An Interactive Game of Thrones Map"},"content":{"rendered":"<h1>Build An Interactive Game of Thrones Map<\/h1>\n<p>Build An Interactive Game of Thrones Map (Part I) - Node.js, PostGIS, and Redis<\/p>\n<\/p>\n<blockquote>\n<p>\u672c\u6587\u7531 <a href=\"http:\/\/ksria.com\/simpread\/\" target=\"_blank\"  rel=\"nofollow\" >\u7b80\u60a6 SimpRead<\/a> \u8f6c\u7801\uff0c \u8f6c\u8f7d\u81ea <a href=\"https:\/\/blog.patricktriest.com\/game-of-thrones-map-node-postgres-redis\/\" target=\"_blank\"  rel=\"nofollow\" >blog.patricktriest.com<\/a><\/p>\n<\/blockquote>\n<blockquote>\n<p>A 20-minute guide to building a Node.js API to serve geospatial &quot;Game of Thrones&quot; data from PostgreSQ......<\/p>\n<\/blockquote>\n<h3>A Game of Maps<\/h3>\n<p><em>Have you ever wondered how &quot;Google Maps&quot; might be working in the background?<\/em><\/p>\n<p><em>Have you watched &quot;Game of Thrones&quot; and been confused about where all of the castles and cities are located in relation to each other?<\/em><\/p>\n<p><em>Do you not care about &quot;Game of Thrones&quot;, but still want a guide to setting up a Node.js server with PostgreSQL and Redis?<\/em><\/p>\n<p>In this 20 minute tutorial, we'll walk through building a Node.js API to serve geospatial&quot;Game of Thrones&quot; data from PostgreSQL (with the PostGIS extension) and Redis.<\/p>\n<p><a href=\"https:\/\/blog.patricktriest.com\/game-of-thrones-leaflet-webpack\/\" target=\"_blank\"  rel=\"nofollow\" >Part II<\/a> of this series provides a tutorial on building a &quot;Google 
Maps&quot; style web application to visualize the data from this API.<\/p>\n<p>Check out <a href=\"https:\/\/atlasofthrones.com\/\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/atlasofthrones.com\/<\/a> for a preview of the final product.<\/p>\n<p>\u200b<img decoding=\"async\" src=\"https:\/\/cdn.patricktriest.com\/blog\/images\/posts\/got_map\/got_map.jpg\" alt=\"\" \/>\u200b<\/p>\n<h3>Step 0 - Setup Local Dependencies<\/h3>\n<p>Before starting, we'll need to install the project dependencies.<\/p>\n<h5>0.0 - PostgreSQL and PostGIS<\/h5>\n<p>The primary datastore for this app is <a href=\"https:\/\/www.postgresql.org\/\" target=\"_blank\"  rel=\"nofollow\" >PostgreSQL<\/a>. Postgres is a powerful and modern SQL database, and is a very solid choice for any app that requires storing and querying relational data. We'll also be using the <a href=\"http:\/\/postgis.net\/\" target=\"_blank\"  rel=\"nofollow\" >PostGIS<\/a> spatial database extender for Postgres, which will allow us to run advanced queries and operations on geographic datatypes.<\/p>\n<p>This page contains the official download and installation instructions for PostgreSQL - <a href=\"https:\/\/www.postgresql.org\/download\/\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/www.postgresql.org\/download\/<\/a><\/p>\n<p>Another good resource for getting started with Postgres can be found here - <a href=\"http:\/\/postgresguide.com\/setup\/install.html\" target=\"_blank\"  rel=\"nofollow\" >http:\/\/postgresguide.com\/setup\/install.html<\/a><\/p>\n<p>If you are using a version of PostgreSQL that does not come bundled with PostGIS, you can find installation guides for PostGIS here -<br \/><a href=\"http:\/\/postgis.net\/install\/\" target=\"_blank\"  rel=\"nofollow\" >http:\/\/postgis.net\/install\/<\/a><\/p>\n<h5>0.1 - Redis<\/h5>\n<p>We'll be using <a href=\"https:\/\/redis.io\/\" target=\"_blank\"  rel=\"nofollow\" >Redis<\/a> in order to cache API responses. 
Redis is an in-memory key-value datastore that will enable our API to serve data with single-digit millisecond response times.<\/p>\n<p>Installation instructions for Redis can be found here - <a href=\"https:\/\/redis.io\/topics\/quickstart\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/redis.io\/topics\/quickstart<\/a><\/p>\n<h5>0.2 - Node.js<\/h5>\n<p>Finally, we'll need <a href=\"https:\/\/nodejs.org\/\" target=\"_blank\"  rel=\"nofollow\" >Node.js<\/a> v7.6 or above to run our core application server and endpoint handlers, and to interface with the two datastores.<\/p>\n<p>Installation instructions for Node.js can be found here -<br \/><a href=\"https:\/\/nodejs.org\/en\/download\/\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/nodejs.org\/en\/download\/<\/a><\/p>\n<h3>Step 1 - Getting Started With Postgres<\/h3>\n<h5>1.0 - Download Database Dump<\/h5>\n<p>To keep things simple, we'll be using a pre-built database dump for this project.<\/p>\n<blockquote>\n<p>The database dump contains polygons and coordinate points for locations in the &quot;Game of Thrones&quot; world, along with their text description data. The geo-data is based on multiple open source contributions, which I've cleaned and combined with text data scraped from <a href=\"http:\/\/awoiaf.westeros.org\/index.php\/Main_Page\" target=\"_blank\"  rel=\"nofollow\" >A Wiki of Ice and Fire<\/a>, <a href=\"http:\/\/gameofthrones.wikia.com\/wiki\/Game_of_Thrones_Wiki\" target=\"_blank\"  rel=\"nofollow\" >Game of Thrones Wiki<\/a>, and <a href=\"http:\/\/www.westeroscraft.com\/home\/\" target=\"_blank\"  rel=\"nofollow\" >WesterosCraft<\/a>. 
More detailed attribution can be found <a href=\"https:\/\/github.com\/triestpa\/Atlas-Of-Thrones\/blob\/master\/attribution.md\" target=\"_blank\"  rel=\"nofollow\" >here<\/a>.<\/p>\n<\/blockquote>\n<p>In order to load the database locally, first download the database dump.<\/p>\n<pre><code>wget https:\/\/cdn.patricktriest.com\/atlas-of-thrones\/atlas_of_thrones.sql\n<\/code><\/pre>\n<h5>1.1 - Create Postgres User<\/h5>\n<p>We'll need to create a user in the Postgres database.<\/p>\n<blockquote>\n<p>If you already have a Postgres instance with users\/roles set up, feel free to skip this step.<\/p>\n<\/blockquote>\n<p>Run <code>psql -U postgres<\/code>\u200b on the command line to enter the Postgres shell as the default <code>postgres<\/code>\u200b user. You might need to run this command as root (with <code>sudo<\/code>\u200b) or as the Postgres user in the operating system (with <code>sudo -u postgres psql<\/code>\u200b) depending on how Postgres is installed on your machine.<\/p>\n<pre><code>psql -U postgres\n<\/code><\/pre>\n<p>Next, create a new user in Postgres.<\/p>\n<pre><code>CREATE USER patrick WITH PASSWORD 'the_best_passsword';\n<\/code><\/pre>\n<p>In case it wasn't obvious, you should replace <code>patrick<\/code>\u200b and <code>the_best_passsword<\/code>\u200b in the above command with your desired username and password respectively.<\/p>\n<h5>1.2 - Create &quot;atlas_of_thrones&quot; Database<\/h5>\n<p>Next, create a new database for your project.<\/p>\n<pre><code>CREATE DATABASE atlas_of_thrones;\n<\/code><\/pre>\n<p>Grant query privileges in the new database to your newly created user.<\/p>\n<pre><code>GRANT ALL PRIVILEGES ON DATABASE atlas_of_thrones to patrick;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO patrick;\n<\/code><\/pre>\n<p>Then connect to this new database, and activate the PostGIS extension.<\/p>\n<pre><code>\\c atlas_of_thrones\nCREATE EXTENSION postgis;\n<\/code><\/pre>\n<p>Run <code>\\q<\/code>\u200b to exit the Postgres 
shell.<\/p>\n<h5>1.3 - Import Database Dump<\/h5>\n<p>Load the downloaded SQL dump into your newly created database.<\/p>\n<pre><code>psql -d atlas_of_thrones &lt; atlas_of_thrones.sql\n<\/code><\/pre>\n<h5>1.4 - List Database Tables<\/h5>\n<p>If you've had no errors so far, congrats!<\/p>\n<p>Let's enter the <code>atlas_of_thrones<\/code>\u200b database from the command line.<\/p>\n<pre><code>psql -d atlas_of_thrones -U patrick\n<\/code><\/pre>\n<p>Again, substitute &quot;patrick&quot; here with your username.<\/p>\n<p>Once we're in the Postgres shell, we can get a list of available tables with the <code>\\dt<\/code>\u200b command.<\/p>\n<pre><code>\\dt\n<\/code><\/pre>\n<pre><code>List of relations\n Schema |      Name       | Type  |  Owner  \n--------+-----------------+-------+---------\n public | kingdoms        | table | patrick\n public | locations       | table | patrick\n public | spatial_ref_sys | table | patrick\n(3 rows)\n<\/code><\/pre>\n<h5>1.5 - Inspect Table Schema<\/h5>\n<p>We can inspect the schema of an individual table by running<\/p>\n<pre><code>\\d kingdoms\n<\/code><\/pre>\n<pre><code>Table &quot;public.kingdoms&quot;\n  Column   |             Type             |                        Modifiers                    \n-----------+------------------------------+---------------------------------------------------------\n gid       | integer                      | not null default nextval('political_gid_seq'::regclass)\n name      | character varying(80)        | \n claimedby | character varying(80)        | \n geog      | geography(MultiPolygon,4326) | \n summary   | text                         | \n url       | text                         | \nIndexes:\n    &quot;political_pkey&quot; PRIMARY KEY, btree (gid)\n    &quot;political_geog_idx&quot; gist (geog)\n<\/code><\/pre>\n<h5>1.6 - Query All Kingdoms<\/h5>\n<p>Now, let's get a list of all of the kingdoms, with their corresponding names, claimants, and ids.<\/p>\n<pre><code>SELECT name, claimedby, 
gid FROM kingdoms;\n<\/code><\/pre>\n<pre><code>name       |   claimedby   | gid \n------------------+---------------+-----\n The North        | Stark         |   5\n The Vale         | Arryn         |   8\n The Westerlands  | Lannister     |   9\n Riverlands       | Tully         |   1\n Gift             | Night's Watch |   3\n The Iron Islands | Greyjoy       |   2\n Dorne            | Martell       |   6\n Stormlands       | Baratheon     |   7\n Crownsland       | Targaryen     |  10\n The Reach        | Tyrell        |  11\n(10 rows)\n<\/code><\/pre>\n<p>Nice! If you're familiar with Game of Thrones, these names probably look familiar.<\/p>\n<h5>1.7 - Query All Location Types<\/h5>\n<p>Let's try out one more query, this time on the <code>location<\/code>\u200b table.<\/p>\n<pre><code>SELECT DISTINCT type FROM locations;\n<\/code><\/pre>\n<pre><code>type   \n----------\n Landmark\n Ruin\n Castle\n City\n Region\n Town\n(6 rows)\n<\/code><\/pre>\n<p>This query returns a list of available <code>location<\/code>\u200b entity types.<\/p>\n<p>Go ahead and exit the Postgres shell with <code>\\q<\/code>\u200b.<\/p>\n<h3>Step 2 - Setup NodeJS project<\/h3>\n<h5>2.0 - Clone Starter Repository<\/h5>\n<p>Run the following commands to clone the starter project and install the dependencies<\/p>\n<pre><code>git clone -b backend-starter https:\/\/github.com\/triestpa\/Atlas-Of-Thrones\ncd Atlas-Of-Thrones\nnpm install\n<\/code><\/pre>\n<p>The starter branch includes a base directory template, with dependencies declared in package.json. 
It is configured with <a href=\"https:\/\/github.com\/eslint\/eslint\" target=\"_blank\"  rel=\"nofollow\" >ESLint<\/a> and <a href=\"https:\/\/github.com\/standard\/standard\" target=\"_blank\"  rel=\"nofollow\" >JavaScript Standard Style<\/a>.<\/p>\n<blockquote>\n<p>If the lack of semicolons in this style guide makes you uncomfortable, that's fine, you're welcome to switch the project to another style in the <code>.eslintrc.js<\/code>\u200b config.<\/p>\n<\/blockquote>\n<h5>2.1 - Add .env file<\/h5>\n<p>Before starting, we'll need to add a <code>.env<\/code>\u200b file to the project root in order to provide environment variables (such as database credentials and CORS configuration) for the Node.js app to use.<\/p>\n<p>Here's a sample <code>.env<\/code>\u200b file with sensible defaults for local development.<\/p>\n<pre><code>PORT=5000\nDATABASE_URL=postgres:\/\/patrick:@localhost:5432\/atlas_of_thrones?ssl=false\nREDIS_HOST=localhost\nREDIS_PORT=6379\nCORS_ORIGIN=http:\/\/localhost:8080\n<\/code><\/pre>\n<p>You'll need to change the &quot;patrick&quot; in the DATABASE_URL entry to match your Postgres user credentials. Unless your name is Patrick, that is, in which case it might already be fine.<\/p>\n<p>A very simple <code>index.js<\/code>\u200b file with the following contents is in the project root directory.<\/p>\n<pre><code>require('dotenv').config()\nrequire('.\/server')\n<\/code><\/pre>\n<p>This will load the variables defined in <code>.env<\/code>\u200b into the process environment, and will start the app defined in the <code>server<\/code>\u200b directory. Now that everything is set up, we're (finally) ready to actually begin building our app!<\/p>\n<blockquote>\n<p>Setting authentication credentials and other environment specific configuration using ENV variables is a good, language agnostic way to handle this information. 
For a tutorial like this it might be considered overkill, but I've encountered quite a few production Node.js servers that are omitting these basic best practices (using hardcoded credentials checked into Git for instance). I imagine these bad practices may have been learned from tutorials which skip these important steps, so I try to focus my tutorial code on providing examples of best practices.<\/p>\n<\/blockquote>\n<h3>Step 3 - Initialize basic Koa server<\/h3>\n<p>We'll be using <a href=\"https:\/\/github.com\/koajs\/koa\" target=\"_blank\"  rel=\"nofollow\" >Koa.js<\/a> as an API framework. Koa is a sequel-of-sorts to the wildly popular <a href=\"https:\/\/github.com\/expressjs\/express\" target=\"_blank\"  rel=\"nofollow\" >Express.js<\/a>. It was built by the same team as Express, with a focus on minimalism, clean control flow, and modern conventions.<\/p>\n<h5>3.0 - Import Dependencies<\/h5>\n<p>Open <code>server\/index.js<\/code>\u200b to begin setting up our server.<\/p>\n<p>First, import the required dependencies at the top of the file.<\/p>\n<pre><code>const Koa = require('koa')\nconst cors = require('kcors')\nconst log = require('.\/logger')\nconst api = require('.\/api')\n<\/code><\/pre>\n<h5>3.1 - Initialize App<\/h5>\n<p>Next, we'll initialize our Koa app, and retrieve the API listening port and CORS settings from the local environment variables.<\/p>\n<p>Add the following (below the imports) in <code>server\/index.js<\/code>\u200b.<\/p>\n<pre><code>\/\/ Setup Koa app\nconst app = new Koa()\nconst port = process.env.PORT || 5000\n\n\/\/ Apply CORS config\nconst origin = process.env.CORS_ORIGIN || '*'\napp.use(cors({ origin }))\n<\/code><\/pre>\n<h5>3.2 - Define Default Middleware<\/h5>\n<p>Now we'll define two middleware functions with <code>app.use<\/code>\u200b. These functions will be applied to every request. 
The first function will log the response times, and the second will catch any errors that are thrown in the endpoint handlers.<\/p>\n<p>Add the following code to <code>server\/index.js<\/code>\u200b.<\/p>\n<pre><code>\/\/ Log all requests\napp.use(async (ctx, next) =&gt; {\n  const start = Date.now()\n  await next() \/\/ This will pause this function until the endpoint handler has resolved\n  const responseTime = Date.now() - start\n  log.info(`${ctx.method} ${ctx.status} ${ctx.url} - ${responseTime} ms`)\n})\n\n\/\/ Error Handler - All uncaught exceptions will percolate up to here\napp.use(async (ctx, next) =&gt; {\n  try {\n    await next()\n  } catch (err) {\n    ctx.status = err.status || 500\n    ctx.body = err.message\n    log.error(`Request Error ${ctx.url} - ${err.message}`)\n  }\n})\n<\/code><\/pre>\n<p>Koa makes heavy use of async\/await for handling the control flow of API request handlers. If you are unclear on how this works, I would recommend reading these resources -<\/p>\n<ul>\n<li><a href=\"https:\/\/medium.com\/ninjadevs\/node-7-6-koa-2-asynchronous-flow-control-made-right-b0d41c6ba570\" target=\"_blank\"  rel=\"nofollow\" >Node 7.6 + Koa 2: Asynchronous Flow Control Made Right<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/koajs\/koa\" target=\"_blank\"  rel=\"nofollow\" >Koa Github Readme<\/a><\/li>\n<li><a href=\"https:\/\/blog.patricktriest.com\/what-is-async-await-why-should-you-care\/\" target=\"_blank\"  rel=\"nofollow\" >Async\/Await Will Make Your Code Simpler<\/a><\/li>\n<\/ul>\n<h5>3.3 - Add Logger Module<\/h5>\n<p>You might notice that we're using <code>log.info<\/code>\u200b and <code>log.error<\/code>\u200b instead of <code>console.log<\/code>\u200b in the above code. In Node.js projects, it's really best to avoid using <code>console.log<\/code>\u200b on production servers, since it makes it difficult to monitor and retain application logs. 
As an alternative, we'll define our own custom logging configuration using <a href=\"https:\/\/github.com\/winstonjs\/winston\" target=\"_blank\"  rel=\"nofollow\" >winston<\/a>.<\/p>\n<p>Add the following code to <code>server\/logger.js<\/code>\u200b.<\/p>\n<pre><code>const winston = require('winston')\nconst path = require('path')\n\n\/\/ Configure custom app-wide logger\nmodule.exports = new winston.Logger({\n  transports: [\n    new (winston.transports.Console)(),\n    new (winston.transports.File)({\n      name: 'info-file',\n      filename: path.resolve(__dirname, '..\/info.log'),\n      level: 'info'\n    }),\n    new (winston.transports.File)({\n      name: 'error-file',\n      filename: path.resolve(__dirname, '..\/error.log'),\n      level: 'error'\n    })\n  ]\n})\n<\/code><\/pre>\n<p>Here we're just defining a small logger module using the <code>winston<\/code>\u200b package. The configuration will forward our application logs to two locations - the command line and the log files. 
Having this centralized configuration will allow us to easily modify logging behavior (say, to forward logs to an ELK server) when transitioning from development to production.<\/p>\n<h5>3.4 - Define &quot;Hello World&quot; Endpoint<\/h5>\n<p>Now open up the <code>server\/api.js<\/code>\u200b file and add the following imports.<\/p>\n<pre><code>const Router = require('koa-router')\nconst database = require('.\/database')\nconst cache = require('.\/cache')\nconst joi = require('joi')\nconst validate = require('koa-joi-validate')\n<\/code><\/pre>\n<p>In this step, all we really care about is the <code>koa-router<\/code>\u200b module.<\/p>\n<p>Below the imports, initialize a new API router.<\/p>\n<pre><code>const router = new Router()\n<\/code><\/pre>\n<p>Now add a simple &quot;Hello World&quot; endpoint.<\/p>\n<pre><code>\/\/ Hello World Test Endpoint\nrouter.get('\/hello', async ctx =&gt; {\n  ctx.body = 'Hello World'\n})\n<\/code><\/pre>\n<p>Finally, export the router at the bottom of the file.<\/p>\n<pre><code>module.exports = router\n<\/code><\/pre>\n<h5>3.5 - Start Server<\/h5>\n<p>Now we can mount the endpoint route(s) and start the server.<\/p>\n<p>Add the following at the end of <code>server\/index.js<\/code>\u200b.<\/p>\n<pre><code>\/\/ Mount routes\napp.use(api.routes(), api.allowedMethods())\n\n\/\/ Start the app\napp.listen(port, () =&gt; { log.info(`Server listening at port ${port}`) })\n<\/code><\/pre>\n<h5>3.6 - Test The Server<\/h5>\n<p>Try starting the server with <code>npm start<\/code>\u200b. You should see the output <code>Server listening at port 5000<\/code>\u200b.<\/p>\n<p>Now try opening <code>http:\/\/localhost:5000\/hello<\/code>\u200b in your browser. You should see a &quot;Hello World&quot; message in the browser, and a request log on the command line. Great, we now have a totally useless API server. 
Time to add some database queries.<\/p>\n<h3>Step 4 - Add Basic Postgres Integration<\/h3>\n<h5>4.0 - Connect to Postgres<\/h5>\n<p>Now that our API server is running, we'll want to connect to our Postgres database in order to actually serve data. In the <code>server\/database.js<\/code>\u200b file, we'll add the following code to connect to our database based on the defined environment variables.<\/p>\n<pre><code>const postgres = require('pg')\nconst log = require('.\/logger')\nconst connectionString = process.env.DATABASE_URL\n\n\/\/ Initialize postgres client\nconst client = new postgres.Client({ connectionString })\n\n\/\/ Connect to the DB\nclient.connect().then(() =&gt; {\n  log.info(`Connected To ${client.database} at ${client.host}:${client.port}`)\n}).catch(log.error)\n<\/code><\/pre>\n<p>Try starting the server again with <code>npm start<\/code>\u200b. You should now see an additional line of output.<\/p>\n<pre><code>info: Server listening at port 5000\ninfo: Connected To atlas_of_thrones at localhost:5432\n<\/code><\/pre>\n<h5>4.1 - Add Basic &quot;NOW&quot; Query<\/h5>\n<p>Now let's add a basic query test to make sure that our database and API server are communicating correctly.<\/p>\n<p>In <code>server\/database.js<\/code>\u200b, add the following code at the bottom -<\/p>\n<pre><code>module.exports = {\n  \/** Query the current time *\/\n  queryTime: async () =&gt; {\n    const result = await client.query('SELECT NOW() as now')\n    return result.rows[0]\n  }\n}\n<\/code><\/pre>\n<p>This will perform one of the simplest possible queries (besides <code>SELECT 1;<\/code>\u200b) on our Postgres database: retrieving the current time.<\/p>\n<h5>4.2 - Connect Time Query To An API Route<\/h5>\n<p>In <code>server\/api.js<\/code>\u200b add the following route below our &quot;Hello World&quot; route.<\/p>\n<pre><code>\/\/ Get time from DB\nrouter.get('\/time', async ctx =&gt; {\n  const result = await database.queryTime()\n  ctx.body = 
result\n})\n<\/code><\/pre>\n<p>Now, we've defined a new endpoint, <code>\/time<\/code>\u200b, which will call our time Postgres query and return the result.<\/p>\n<p>Run <code>npm start<\/code>\u200b and visit <code>http:\/\/localhost:5000\/time<\/code>\u200b in the browser. You should see a JSON object containing the current UTC time. Ok cool, we're now serving information from Postgres over our API. The server is still a bit boring and useless though, so let's move on to the next step.<\/p>\n<h3>Step 5 - Add Geojson Endpoints<\/h3>\n<p>Our end goal is to render our &quot;Game of Thrones&quot; dataset on a map. To do so, we'll need to serve our data in a web-map friendly format: <a href=\"http:\/\/geojson.org\/\" target=\"_blank\"  rel=\"nofollow\" >GeoJSON<\/a>. GeoJSON is a JSON specification (<a href=\"https:\/\/tools.ietf.org\/html\/rfc7946\" target=\"_blank\"  rel=\"nofollow\" >RFC 7946<\/a>), which will format geographic coordinates and polygons in a way that can be natively understood by browser-based map rendering tools.<\/p>\n<blockquote>\n<p>Note - If you want to minimize payload size, you could convert the GeoJSON results to <a href=\"https:\/\/github.com\/topojson\/topojson\" target=\"_blank\"  rel=\"nofollow\" >TopoJSON<\/a>, a newer format that is able to represent shapes more efficiently by eliminating redundancy. 
Our GeoJSON results are not prohibitively large (around 50kb for all of the Kingdom shapes, and less than 5kb for each set of location types), so we won't bother with that in this tutorial.<\/p>\n<\/blockquote>\n<h5>5.0 - Add GeoJSON Queries<\/h5>\n<p>In the <code>server\/database.js<\/code>\u200b file, add the following functions under the <code>queryTime<\/code>\u200b function, inside the <code>module.exports<\/code>\u200b block.<\/p>\n<pre><code>\/** Query the locations as geojson, for a given type *\/\ngetLocations: async (type) =&gt; {\n  const locationQuery = `\n    SELECT ST_AsGeoJSON(geog), name, type, gid\n    FROM locations\n    WHERE UPPER(type) = UPPER($1);`\n  const result = await client.query(locationQuery, [ type ])\n  return result.rows\n},\n\n\/** Query the kingdom boundaries *\/\ngetKingdomBoundaries: async () =&gt; {\n  const boundaryQuery = `\n    SELECT ST_AsGeoJSON(geog), name, gid\n    FROM kingdoms;`\n  const result = await client.query(boundaryQuery)\n  return result.rows\n}\n<\/code><\/pre>\n<p>Here, we are using the <code>ST_AsGeoJSON<\/code>\u200b function from PostGIS in order to convert the polygons and coordinate points to browser-friendly GeoJSON. We are also retrieving the name and id for each entry.<\/p>\n<blockquote>\n<p>Note that in the location query, we are not directly appending the provided type to the query string. Instead, we're using <code>$1<\/code>\u200b as a placeholder in the query string and passing the type as a parameter to the <code>client.query<\/code>\u200b call. 
This is important since it will allow Postgres to sanitize the &quot;type&quot; input and prevent SQL injection attacks.<\/p>\n<\/blockquote>\n<h5>5.1 - Add GeoJSON Endpoint<\/h5>\n<p>In the <code>server\/api.js<\/code>\u200b file, declare the following endpoints.<\/p>\n<pre><code>router.get('\/locations\/:type', async ctx =&gt; {\n  const type = ctx.params.type\n  const results = await database.getLocations(type)\n  if (results.length === 0) { ctx.throw(404) }\n\n  \/\/ Add row metadata as geojson properties\n  const locations = results.map((row) =&gt; {\n    let geojson = JSON.parse(row.st_asgeojson)\n    geojson.properties = { name: row.name, type: row.type, id: row.gid }\n    return geojson\n  })\n\n  ctx.body = locations\n})\n\n\/\/ Respond with boundary geojson for all kingdoms\nrouter.get('\/kingdoms', async ctx =&gt; {\n  const results = await database.getKingdomBoundaries()\n  if (results.length === 0) { ctx.throw(404) }\n\n  \/\/ Add row metadata as geojson properties\n  const boundaries = results.map((row) =&gt; {\n    let geojson = JSON.parse(row.st_asgeojson)\n    geojson.properties = { name: row.name, id: row.gid }\n    return geojson\n  })\n\n  ctx.body = boundaries\n})\n<\/code><\/pre>\n<p>Here, we are executing the corresponding Postgres queries and awaiting each response. 
We are then mapping over each result row to add the entity metadata as GeoJSON properties.<\/p>\n<h5>5.2 - Test the GeoJSON Endpoints<\/h5>\n<p>I've deployed a very simple HTML page <a href=\"https:\/\/cdn.patricktriest.com\/atlas-of-thrones\/geojsonpreview.html\" target=\"_blank\"  rel=\"nofollow\" >here<\/a> to test out the GeoJSON responses using <a href=\"https:\/\/github.com\/Leaflet\/Leaflet\" target=\"_blank\"  rel=\"nofollow\" >Leaflet<\/a>.<\/p>\n<p>In order to provide a background for the GeoJSON data, the test page loads a sweet &quot;Game of Thrones&quot; basemap produced by <a href=\"https:\/\/carto.com\/blog\/game-of-thrones-basemap\/\" target=\"_blank\"  rel=\"nofollow\" >Carto<\/a>. This simple HTML page is also included in the starter project, in the <code>geojsonpreview<\/code>\u200b directory.<\/p>\n<p>Start the server (<code>npm start<\/code>\u200b) and open <code>http:\/\/localhost:5000\/kingdoms<\/code>\u200b in your browser to download the kingdom boundary GeoJSON. Paste the response into the textbox in the &quot;geojsonpreview&quot; web app, and you should see an outline of each kingdom. 
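For reference, each element in these response arrays is just the parsed `ST_AsGeoJSON` string with the row metadata attached as GeoJSON `properties`. A minimal sketch of that transformation (the coordinates, name, and `gid` below are invented for illustration):

```javascript
// One row as it might come back from the locations query: the geometry
// arrives as a GeoJSON string, the metadata as plain columns.
// (These values are made up for illustration.)
const row = {
  st_asgeojson: '{"type":"Point","coordinates":[18.5,4.2]}',
  name: 'Winterfell',
  type: 'Castle',
  gid: 42
}

// The same transformation the endpoint handlers perform
const geojson = JSON.parse(row.st_asgeojson)
geojson.properties = { name: row.name, type: row.type, id: row.gid }

console.log(JSON.stringify(geojson, null, 2))
```

Strictly speaking, the GeoJSON spec attaches `properties` to `Feature` objects rather than to bare geometries, but this shortcut keeps the payload small and works fine for our purposes here.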
Clicking on each kingdom will reveal the geojson properties for that polygon.<\/p>\n<p>Now try adding the GeoJSON from the location type endpoint - <code>http:\/\/localhost:5000\/locations\/castle<\/code>\u200b<\/p>\n<p>Pretty cool, huh?<\/p>\n<p>\u200b<img decoding=\"async\" src=\"https:\/\/cdn.patricktriest.com\/blog\/images\/posts\/got_map\/geojson_preview.jpg\" alt=\"\" \/>\u200b<\/p>\n<blockquote>\n<p>If you're interested in learning more about rendering these GeoJSON results, be sure to check back next week for part II of this tutorial, where we'll be building out the webapp using our API - <a href=\"https:\/\/atlasofthrones.com\/\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/atlasofthrones.com\/<\/a><\/p>\n<\/blockquote>\n<h3>Step 6 - Advanced PostGIS Queries<\/h3>\n<p>Now that we have a basic GeoJSON service running, let's play with some of the more interesting capabilities of PostgreSQL and PostGIS.<\/p>\n<h4>6.0 - Calculate Kingdom Sizes<\/h4>\n<p>PostGIS has a function called <code>ST_AREA<\/code>\u200b that can be used to calculate the total area covered by a polygon. 
Let's add a new query to calculate the total area for each kingdom of Westeros.<\/p>\n<p>Add the following function to the <code>module.exports<\/code>\u200b block in <code>server\/database.js<\/code>\u200b.<\/p>\n<pre><code>\/** Calculate the area of a given region, by id *\/\ngetRegionSize: async (id) =&gt; {\n  const sizeQuery = `\n      SELECT ST_AREA(geog) as size\n      FROM kingdoms\n      WHERE gid = $1\n      LIMIT(1);`\n  const result = await client.query(sizeQuery, [ id ])\n  return result.rows[0]\n},\n<\/code><\/pre>\n<p>Next, add an endpoint in <code>server\/api.js<\/code>\u200b to execute this query.<\/p>\n<pre><code>\/\/ Respond with calculated area of kingdom, by id\nrouter.get('\/kingdoms\/:id\/size', async ctx =&gt; {\n  const id = ctx.params.id\n  const result = await database.getRegionSize(id)\n  if (!result) { ctx.throw(404) }\n\n  \/\/ Convert response (in square meters) to square kilometers\n  const sqKm = result.size * (10 ** -6)\n  ctx.body = sqKm\n})\n<\/code><\/pre>\n<blockquote>\n<p>We know that the resulting units are in square meters because the geography data was originally loaded into Postgres using an EPSG:4326 coordinate system.<\/p>\n<\/blockquote>\n<p>While the computation is mathematically sound, we are performing this operation on a fictional landscape, so the resulting value is an estimate at best. These computations put the entire continent of Westeros at about 9.5 million square kilometers, which actually sounds about right compared to Europe, which is 10.18 million square kilometers.<\/p>\n<p>Now you can call, say, <code>http:\/\/localhost:5000\/kingdoms\/1\/size<\/code>\u200b to get the size of a kingdom (in this case &quot;The Riverlands&quot;) in square kilometers. 
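The unit conversion in the handler is worth spelling out: `ST_AREA` on a `geography` column returns square meters, and one square kilometer is 10^6 square meters, so the handler scales by 10^-6. A quick sanity check, using the rough Westeros estimate mentioned above as the input:

```javascript
// ST_AREA on a geography column reports square meters; scale by 10^-6
// to get square kilometers, exactly as the /kingdoms/:id/size handler does.
const squareMeters = 9.5e12 // the ~9.5 million sq km estimate, in sq m
const squareKm = squareMeters * (10 ** -6)
console.log(squareKm) // ~9.5 million
```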
You can refer to the table from step 1.6 to link each kingdom with their respective id.<\/p>\n<h4>6.1 - Count Castles In Each Kingdom<\/h4>\n<p>Using PostgreSQL and PostGIS, we can even perform geospatial joins on our dataset!<\/p>\n<blockquote>\n<p>In SQL terminology, a JOIN is when you combine columns from more than one table in a single result.<\/p>\n<\/blockquote>\n<p>For instance, let's create a query to count the number of castles in each kingdom. Add the following query function to our <code>server\/database.js<\/code>\u200b module.<\/p>\n<pre><code>\/** Count the number of castles in a region, by id *\/\ncountCastles: async (regionId) =&gt; {\n  const countQuery = `\n    SELECT count(*)\n    FROM kingdoms, locations\n    WHERE ST_intersects(kingdoms.geog, locations.geog)\n    AND kingdoms.gid = $1\n    AND locations.type = 'Castle';`\n  const result = await client.query(countQuery, [ regionId ])\n  return result.rows[0]\n},\n<\/code><\/pre>\n<p>Easy! Here we're using <code>ST_intersects<\/code>\u200b, a PostGIS function to find intersections in the geometries. The result will be the number of location coordinates of type <code>Castle<\/code>\u200b that intersect with the specified kingdom boundaries polygon.<\/p>\n<p>Now we can add an API endpoint to <code>\/server\/api.js<\/code>\u200b in order to return the results of this query.<\/p>\n<pre><code>\/\/ Respond with number of castles in kingdom, by id\nrouter.get('\/kingdoms\/:id\/castles', async ctx =&gt; {\n  const regionId = ctx.params.id\n  const result = await database.countCastles(regionId)\n  ctx.body = result ? result.count : ctx.throw(404)\n})\n<\/code><\/pre>\n<p>If you try out <code>http:\/\/localhost:5000\/kingdoms\/1\/castles<\/code>\u200b you should see the number of castles in the specified kingdom. 
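Conceptually, the spatial join asks, for each castle point, "does this point fall inside the kingdom polygon?". Here is a toy ray-casting sketch of that idea with a made-up square kingdom, purely for intuition; PostGIS's `ST_intersects` is far more sophisticated, and uses the `gist` spatial index we saw in the table schema:

```javascript
// Toy point-in-polygon test via ray casting: count how many polygon edges
// a horizontal ray from the point crosses; an odd count means "inside".
// This is NOT how PostGIS is implemented - it is just the core idea.
function pointInPolygon ([x, y], polygon) {
  let inside = false
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i]
    const [xj, yj] = polygon[j]
    const crosses = (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi
    if (crosses) inside = !inside
  }
  return inside
}

// A made-up square "kingdom" and some castle coordinates
const kingdom = [[0, 0], [10, 0], [10, 10], [0, 10]]
const castles = [[2, 3], [5, 5], [42, 1]]
const count = castles.filter(p => pointInPolygon(p, kingdom)).length
console.log(count) // 2
```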
In this case, it appears the &quot;The Riverlands&quot; contains eleven castles.<\/p>\n<h3>Step 7 - Input Validation<\/h3>\n<p>We've been having so much fun playing with PostGIS queries that we've forgotten an essential part of building an API - Input Validation!<\/p>\n<p>For instance, if we pass an invalid ID to our endpoint, such as <code>http:\/\/localhost:5000\/kingdoms\/gondor\/castles<\/code>\u200b, the query will reach the database before it's rejected, resulting in a thrown error and an HTTP 500 response. Not good!<\/p>\n<p>A naive approach to this issue would have us manually checking each query parameter at the beginning of each endpoint handler, but that's tedious and difficult to keep consistent across multiple endpoints, let alone across a larger team.<\/p>\n<p><a href=\"https:\/\/github.com\/hapijs\/joi\" target=\"_blank\"  rel=\"nofollow\" >Joi<\/a> is a fantastic library for validating Javascript objects. It is often paired with the <a href=\"https:\/\/github.com\/hapijs\/hapi\" target=\"_blank\"  rel=\"nofollow\" >Hapi.js<\/a> framework, since it was built by the Hapi.js team. Joi is framework agnostic, however, so we can use it in our Koa app without issue.<\/p>\n<p>We'll use the <a href=\"https:\/\/www.npmjs.com\/package\/koa-joi-validate\" target=\"_blank\"  rel=\"nofollow\" >koa-joi-validate<\/a> NPM package to generate input validation middleware.<\/p>\n<blockquote>\n<p>Disclaimer - I'm the author of <code>koa-joi-validate<\/code>\u200b. It's a very short module that was built for use in some of my own projects. 
If you don't trust me, feel free to just copy the code into your own project - it's only about 50 lines total, and <code>Joi<\/code> is the only dependency (<a href=\"https:\/\/github.com\/triestpa\/koa-joi-validate\/blob\/master\/index.js\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/github.com\/triestpa\/koa-joi-validate\/blob\/master\/index.js<\/a>).<\/p>\n<\/blockquote>\n<p>In <code>server\/api.js<\/code>, above our API endpoint handlers, we'll define two input validation functions - one for validating IDs, and one for validating location types.<\/p>\n<pre><code>\/\/ Check that the id param is a valid number\nconst idValidator = validate({\n  params: { id: joi.number().min(0).max(1000).required() }\n})\n\n\/\/ Check that the type param is a valid location type\nconst typeValidator = validate({\n  params: { type: joi.string().valid(['castle', 'city', 'town', 'ruin', 'landmark', 'region']).required() }\n})\n<\/code><\/pre>\n<p>Now, with our validators defined, we can apply them as middleware in each route that needs to parse URL parameter input.<\/p>\n<pre><code>router.get('\/locations\/:type', typeValidator, async ctx =&gt; {\n...\n})\n\nrouter.get('\/kingdoms\/:id\/castles', idValidator, async ctx =&gt; {\n...\n})\n\nrouter.get('\/kingdoms\/:id\/size', idValidator, async ctx =&gt; {\n...\n})\n<\/code><\/pre>\n<p>Ok great, problem solved. 
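<\/p>\n<p>Under the hood, a validator like this is just a higher-order function that returns Koa middleware. Here is a rough, dependency-free sketch of the idea (the predicate-based <code>validate<\/code> helper and <code>isId<\/code> below are illustrations only, not the actual <code>koa-joi-validate<\/code> source, which uses real Joi schemas):<\/p>

```javascript
// Illustrative sketch only - not the real koa-joi-validate implementation.
// Plain predicate functions stand in for Joi schemas, but the shape is the
// same: a higher-order function that returns Koa-style middleware.
function validate (schema) {
  return async (ctx, next) => {
    for (const [param, isValid] of Object.entries(schema.params || {})) {
      if (!isValid(ctx.params[param])) {
        // Reject before the handler (and the database) is ever reached
        const err = new Error(`Invalid param: ${param}`)
        err.status = 400
        throw err
      }
    }
    await next() // all params passed - continue to the endpoint handler
  }
}

// Predicate roughly mirroring joi.number().min(0).max(1000).required()
const isId = value => /^\d+$/.test(String(value)) && Number(value) <= 1000
const idValidator = validate({ params: { id: isId } })
```

<p>With a guard like this sitting in front of each route, malformed input never reaches the database. 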
Now if we try to pull any sneaky <code>http:\/\/localhost:5000\/locations\/;DROP%20TABLE%20LOCATIONS;<\/code> shenanigans, the request will be automatically rejected with an HTTP 400 &quot;Bad Request&quot; response before it even hits our endpoint handler.<\/p>\n<h3>Step 8 - Retrieving Summary Data<\/h3>\n<p>Let's add one more set of endpoints now, to retrieve the summary data and wiki URLs for each kingdom\/location.<\/p>\n<h5>8.0 - Add Summary Postgres Queries<\/h5>\n<p>Add the following query function to the <code>module.exports<\/code> block in <code>server\/database.js<\/code>.<\/p>\n<pre><code>\/** Get the summary for a location or region, by id *\/\ngetSummary: async (table, id) =&gt; {\n  if (table !== 'kingdoms' &amp;&amp; table !== 'locations') {\n    throw new Error(`Invalid Table - ${table}`)\n  }\n\n  const summaryQuery = `\n      SELECT summary, url\n      FROM ${table}\n      WHERE gid = $1\n      LIMIT 1;`\n  const result = await client.query(summaryQuery, [ id ])\n  return result.rows[0]\n}\n<\/code><\/pre>\n<p>Here we're taking the table name as a function parameter, which will allow us to reuse the function for both tables. 
This is a bit dangerous, so we'll make sure it's an expected table name before appending it to the query string.<\/p>\n<h5>8.1 - Add Summary API Routes<\/h5>\n<p>In <code>server\/api.js<\/code>, we'll add endpoints to retrieve this summary data.<\/p>\n<pre><code>\/\/ Respond with summary of kingdom, by id\nrouter.get('\/kingdoms\/:id\/summary', idValidator, async ctx =&gt; {\n  const id = ctx.params.id\n  const result = await database.getSummary('kingdoms', id)\n  ctx.body = result || ctx.throw(404)\n})\n\n\/\/ Respond with summary of location, by id\nrouter.get('\/locations\/:id\/summary', idValidator, async ctx =&gt; {\n  const id = ctx.params.id\n  const result = await database.getSummary('locations', id)\n  ctx.body = result || ctx.throw(404)\n})\n<\/code><\/pre>\n<p>Ok cool, that was pretty straightforward.<\/p>\n<p>We can test out the new endpoints with, say, <code>localhost:5000\/locations\/1\/summary<\/code>, which should return a JSON object containing a summary string and the URL of the wiki article that it was scraped from.<\/p>\n<h3>Step 9 - Integrate Redis<\/h3>\n<p>Now that all of the endpoints and queries are in place, we'll add a request cache using Redis to make our API super fast and efficient.<\/p>\n<h5>9.0 - Do We Actually Need Redis?<\/h5>\n<p>No, not really.<\/p>\n<p>So here's what happened - the project was originally hitting the MediaWiki APIs directly for each location summary, which was taking around 2000-3000 milliseconds per request. In order to speed up the summary endpoints, and to avoid overloading the wiki API, I added a Redis cache to the project in order to save the summary data responses after each MediaWiki API call.<\/p>\n<p>Since then, however, I've scraped all of the summary data from the wikis and added it directly to the database. 
Now that the summaries are stored directly in Postgres, the Redis cache is much less necessary.<\/p>\n<p>Redis is probably overkill here since we won't really be taking advantage of its ultra-fast write speeds, ACID compliance, and other useful features (like being able to set expiry dates on key entries). Additionally, Postgres has its own in-memory query cache, so using Redis won't even be <em>that<\/em> much faster.<\/p>\n<p>Despite this, we'll throw it into our project anyway since it's easy, fun, and will hopefully provide a good introduction to using Redis in a Node.js project.<\/p>\n<h5>9.1 - Add Cache Module<\/h5>\n<p>First, we'll add a new module to connect with Redis, and to define two helper middleware functions.<\/p>\n<p>Add the following code to <code>server\/cache.js<\/code>\u200b.<\/p>\n<pre><code>const Redis = require('ioredis')\nconst redis = new Redis(process.env.REDIS_PORT, process.env.REDIS_HOST)\n\nmodule.exports = {\n  \/** Koa middleware function to check cache before continuing to any endpoint handlers *\/\n  async checkResponseCache (ctx, next) {\n    const cachedResponse = await redis.get(ctx.path)\n    if (cachedResponse) { \/\/ If cache hit\n      ctx.body = JSON.parse(cachedResponse) \/\/ return the cached response\n    } else {\n      await next() \/\/ only continue if result not in cache\n    }\n  },\n  \/** Koa middleware function to insert response into cache *\/\n  async addResponseToCache (ctx, next) {\n    await next() \/\/ Wait until other handlers have finished\n    if (ctx.body &amp;&amp; ctx.status === 200) { \/\/ If request was successful\n      \/\/ Cache the response\n      await redis.set(ctx.path, JSON.stringify(ctx.body))\n    }\n  }\n}\n<\/code><\/pre>\n<p>The first middleware function (<code>checkResponseCache<\/code>\u200b) here will check the cache for the request path (<code>\/kingdoms\/5\/size<\/code>\u200b, for example) before continuing to the endpoint handler. 
If there is a cache hit, the cached response will be returned immediately, and the endpoint handler will not be called.<\/p>\n<p>The second middleware function (<code>addResponseToCache<\/code>) will wait until the endpoint handler has completed, and will cache the response using the request path as a key. This function will only ever be executed if the response is not yet in the cache.<\/p>\n<h5>9.2 - Apply Cache Middleware<\/h5>\n<p>At the beginning of <code>server\/api.js<\/code>, right after <code>const router = new Router()<\/code>, apply the two cache middleware functions.<\/p>\n<pre><code>\/\/ Check cache before continuing to any endpoint handlers\nrouter.use(cache.checkResponseCache)\n\n\/\/ Insert response into cache once handlers have finished\nrouter.use(cache.addResponseToCache)\n<\/code><\/pre>\n<p>That's it! Redis is now fully integrated into our app, and our response times should plunge down into the optimal 0-5 millisecond range for repeated requests.<\/p>\n<blockquote>\n<p>There's a famous adage among software engineers - &quot;There are only two hard things in Computer Science: cache invalidation and naming things.&quot; (credited to Phil Karlton). In a more advanced application, we would have to worry about cache invalidation - or selectively removing entries from the cache in order to serve updated data. Luckily for us, our API is read-only, so we never actually have to worry about updating the cache. Score! If you use this technique in an app that is not read-only, keep in mind that Redis allows you to set the expiration timeout of entries using the &quot;SETEX&quot; command.<\/p>\n<\/blockquote>\n<h5>9.3 - Redis-CLI Primer<\/h5>\n<p>We can use the redis-cli to monitor the cache status and operations.<\/p>\n<pre><code>redis-cli monitor\n<\/code><\/pre>\n<p>This command will provide a live-feed of Redis operations. 
If we start making requests with a clean cache, we'll initially see lots of &quot;set&quot; commands, with resources being inserted in the cache. On subsequent requests, most of the output will be &quot;get&quot; commands, since the responses will have already been cached.<\/p>\n<p>We can get a list of cache entries with the <code>--scan<\/code> flag.<\/p>\n<pre><code>redis-cli --scan | head -5\n<\/code><\/pre>\n<pre><code>\/kingdoms\/2\/summary\n\/locations\/294\/summary\n\/locations\/town\n\/kingdoms\n\/locations\/region\n<\/code><\/pre>\n<p>To directly interact with our local Redis instance, we can launch the Redis shell by running <code>redis-cli<\/code>.<\/p>\n<pre><code>redis-cli\n<\/code><\/pre>\n<p>We can use the <code>dbsize<\/code> command to check how many entries are currently cached.<\/p>\n<pre><code>127.0.0.1:6379&gt; dbsize\n<\/code><\/pre>\n<pre><code>(integer) 15\n<\/code><\/pre>\n<p>We can preview a specific cache entry with the <code>GET<\/code> command.<\/p>\n<pre><code>127.0.0.1:6379&gt; GET \/kingdoms\/2\/summary\n<\/code><\/pre>\n<pre><code>&quot;{\\&quot;summary\\&quot;:\\&quot;The Iron Islands is one of the constituent regions of the Seven Kingdoms. Until Aegon's Conquest it was ruled by the Kings of the Iron ...}&quot;\n<\/code><\/pre>\n<p>Finally, if we want to completely clear the cache we can run the <code>FLUSHALL<\/code> command.<\/p>\n<pre><code>127.0.0.1:6379&gt; FLUSHALL\n<\/code><\/pre>\n<p>Redis is a very powerful and flexible datastore, and can be used for much, much more than basic HTTP request caching. I hope that this section has been a useful introduction to integrating Redis in a Node.js project. 
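<\/p>\n<p>For instance, if this API ever became writable, we could give each cache entry a time-to-live so that stale responses expire on their own, using the &quot;SET key value EX seconds&quot; form of the command. Here is a hedged sketch of such a variant (the <code>addResponseToCacheWithTTL<\/code> name, the injected client, and the 60-second default are illustrative assumptions, not part of the tutorial's code):<\/p>

```javascript
// Sketch (assumption, not the tutorial's actual code): a TTL-aware variant of
// the addResponseToCache middleware. ioredis supports the Redis SET command's
// "EX seconds" arguments, which give the key an expiration timeout.
function addResponseToCacheWithTTL (redis, ttlSeconds = 60) {
  return async (ctx, next) => {
    await next() // wait until the other handlers have finished
    if (ctx.body && ctx.status === 200) { // only cache successful responses
      // The EX argument gives the entry a time-to-live, so stale data evicts itself
      await redis.set(ctx.path, JSON.stringify(ctx.body), 'EX', ttlSeconds)
    }
  }
}
```

<p>Injecting the Redis client as a parameter also keeps the middleware easy to unit test with a stub client. 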
I would recommend that you read more about Redis if you want to learn the full extent of its capabilities - <a href=\"https:\/\/redis.io\/topics\/introduction\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/redis.io\/topics\/introduction<\/a>.<\/p>\n<h3>Next up - The Map UI<\/h3>\n<p>Congrats, you've just built a highly-performant geospatial data server!<\/p>\n<p>There are lots of additions that can be made from here, the most obvious of which is building a frontend web application to display data from our API.<\/p>\n<p><a href=\"https:\/\/blog.patricktriest.com\/game-of-thrones-leaflet-webpack\/\" target=\"_blank\"  rel=\"nofollow\" >Part II<\/a> of this tutorial provides a step-by-step guide to building a fast, mobile-responsive &quot;Google Maps&quot; style UI for this data using <a href=\"https:\/\/github.com\/Leaflet\/Leaflet\" target=\"_blank\"  rel=\"nofollow\" >Leaflet.js<\/a>.<\/p>\n<p>For a preview of this end-result, check out the webapp here - <a href=\"https:\/\/atlasofthrones.com\/\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/atlasofthrones.com\/<\/a><\/p>\n<p>\u200b<img decoding=\"async\" src=\"https:\/\/cdn.patricktriest.com\/blog\/images\/posts\/got_map\/got_map.jpg\" alt=\"\" \/>\u200b<\/p>\n<p>Visit the open-source Github repository to explore the complete backend and frontend codebase - <a href=\"https:\/\/github.com\/triestpa\/Atlas-Of-Thrones\" target=\"_blank\"  rel=\"nofollow\" >https:\/\/github.com\/triestpa\/Atlas-Of-Thrones<\/a><\/p>\n<p>I hope this tutorial was informative and fun! 
Feel free to comment below with any suggestions, criticisms, or ideas about where to take the app from here.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Build An Interactive Game of Thrones Map Build An Interactive Gam &#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"emotion":"","emotion_color":"","title_style":"","license":"","footnotes":""},"categories":[67],"tags":[48],"class_list":["post-415","post","type-post","status-publish","format-standard","hentry","category-67","tag-48"],"_links":{"self":[{"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/posts\/415","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/comments?post=415"}],"version-history":[{"count":2,"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/posts\/415\/revisions"}],"predecessor-version":[{"id":418,"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/posts\/415\/revisions\/418"}],"wp:attachment":[{"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/media?parent=415"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/categories?post=415"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/blog.cyasylum.top\/index.php\/wp-json\/wp\/v2\/tags?post=415"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}