Jikan – The Unofficial MyAnimeList REST API

Jikan is a REST-based API that fills in the requests missing from the official MyAnimeList API for developers. https://jikan.me


Documentation: https://jikan.me/docs

Source: https://github.com/irfan-dahir/jikan

 

Introduction

When the idea of creating my own Anime Database sparked within me, I set out to parse data from an existing website, MyAnimeList, since I use it a lot for managing the content I watch and read.

Read: Making My Own Anime Database – Part 1 | Making My Own Anime Database – Part 2

I was dumbfounded when I realized that the official API did not support fetching anime or manga details. There was a way to do this via the official API, but it was totally roundabout: you had to use one of their search endpoints, which would return a list of similar anime/manga along with their details.

I could have used AniList’s API, but I was already familiar with scraping data – I’ve done it in a lot of former projects. And so I set out to develop Jikan to fulfill my parent goal: to make my own anime database. And so it became a project of its own.

History

Jikan was uploaded to GitHub on the 11th of January with a single function: scraping anime data.

It wasn’t even called ‘Jikan’ back then, it was called the ‘Unofficial MAL API’. Quite generic, I know.

I settled on the name ‘Jikan’ as it was the only domain name available for the .me TLD and it’s a commonly used word in Japanese – ‘time’. The ‘Plan A’ name was ‘Shiro’, but unfortunately everyone seemed to have hogged all the TLDs for it.

With this API, I guess you could say I’d be saving developers some … Jikan – Heh.


Enter;Jikan

Sounds like a title from the Steins;Gate multiverse.

Anyways, Jikan can provide the following details from MAL simply via their MAL ID:

  • Anime
  • Manga
  • Character
  • Person

These are the implemented functions as of now. There are some further planned features.
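To give you an idea, fetching an anime could look something like this in PHP. Consider it a minimal sketch: the endpoint shape (/api/anime/{id}) and the ‘title’ field are assumptions for illustration, so defer to the docs for the actual paths and response layout.

    <?php
    // Minimal sketch: fetch an anime by its MAL ID through Jikan.
    // The endpoint path and the 'title' field are assumptions here;
    // see https://jikan.me/docs for the actual API layout.
    $id    = 1; // Cowboy Bebop's MAL ID
    $json  = file_get_contents("https://jikan.me/api/anime/{$id}");
    $anime = json_decode($json, true);

    echo $anime['title'] ?? 'title field may be named differently';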

Search Results

The official API does support this. However:

  1. The response is in XML
  2. It only shows the results of the first page

Jikan will change that by showing results for whatever page you select. And oh – it returns in JSON.
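In PHP terms, paging through search results might look roughly like this – again just a sketch, with the /api/search/anime/{query}/{page} shape assumed purely for illustration:

    <?php
    // Hypothetical search request; note the page number in the path.
    $query = urlencode('fate');
    $page  = 2;

    $results = json_decode(
        file_get_contents("https://jikan.me/api/search/anime/{$query}/{$page}"),
        true
    );

    print_r($results);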

Is that it?

Mostly, yes. This API was developed to give developers easy access to data which isn’t available through the official API. And there you go.

 

Get Started

So, what are you waiting for?

Head over to the documentation and get started!

https://jikan.me

Making my own Anime Database – Part 2

More than 5 months have passed since I posted about making my own Anime Database, yet it does not age. It’s time to get back. Anime Database!

Read Part 1: https://irfandahir.wordpress.com/2016/12/21/making-my-own-anime-database-part-1/

Apart from that terrible reference, it is indeed time to tell you where the anime database stands right now. But first of all, I thought it would be best to clear up what this is all really about, since my former related post was just really me typing at 200wpm while breathing heavily as the idea cast a spell over me.

What is this sh*t?

There’s a bunch of anime databases out there apart from MyAnimeList, such as Anime Planet, AniDB and Anime News Network to name a few. Websites like these contain anime/manga/novel entries which detail each item. They can be compared to IMDB, which does the same – except for movies. Sometimes it’s useful to provide a RESTful API which allows developers to fetch these item details from your database and add them to their own applications. Because the last thing we want to do is input all the anime/manga data into our own databases using traditional methods. Why not let the computer do it for us, amirite?

[Image: RESTful API diagram]

via https://codeplanet.io/principles-good-restful-api-design/

Now, back to MyAnimeList. MAL has an API but it’s very lacking. You can’t fetch anime, manga, people or even character details directly. Furthermore, the output is in XML rather than JSON. 😦

Okay, what now?

So what do we do? We create our own. Let’s say that we now have an API that can fetch any anime or manga data via their links by means of scraping.

Let’s talk about scraping. Scraping is a method that fetches a web page and goes through all the nicely written /s HTML code with an algorithm that extracts the information you need from that page. When there’s no API, this is the only solution. That, or we use another service that provides an API – but I really wanted to see how far I could go with this project, so why not?
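To make that concrete, here’s a bare-bones scraping sketch in PHP: download the page, parse the HTML, pull out one field. The XPath query is illustrative – MAL’s real markup is messier than a lone <h1>.

    <?php
    // Bare-bones scraper sketch: fetch a page, parse the HTML,
    // extract one piece of information from it.
    $html = file_get_contents('https://myanimelist.net/anime/1');

    $dom = new DOMDocument();
    libxml_use_internal_errors(true); // real-world HTML is rarely valid
    $dom->loadHTML($html);
    libxml_clear_errors();

    $xpath = new DOMXPath($dom);
    $node  = $xpath->query('//h1')->item(0); // assumes the title sits in an <h1>

    echo $node ? trim($node->textContent) : 'not found';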

What’s left?

We now have code that scrapes a web page and returns juicy data that you can cache/save/add/whatever. This requires you to provide the algorithm a link to the page you want scraped, but there are hundreds of thousands of anime and manga out there. It would be ridiculous to leave that to human hands. This is where the Crawler comes in.

The Crawler

What a ‘Crawler’ generally does is start at some page and scan it for other links. Those links get saved, then it visits them in turn, and this recursively keeps on going and going and going.


Now, as the crawler is doing its job, the scraper goes through the freshly populated cache of links and gets the data from them. This is basically how search engines index pages.

But we’re making a really specific crawler. What I’m looking for are links to anime entries within MAL, as I mentioned before, which follow this pattern: https://myanimelist.net/anime/{anime id}

The crawler looks for links with this pattern and saves them, then we have the scraper go through them, and we get an indexed database!
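The pattern-matching half of that is simple enough to sketch: given a downloaded page, one regex collects every link that fits the anime URL shape.

    <?php
    // Collect every link on a page that matches
    // https://myanimelist.net/anime/{anime id}
    $html = file_get_contents('https://myanimelist.net');

    preg_match_all('~https?://myanimelist\.net/anime/(\d+)~', $html, $matches);

    $animeIds = array_unique($matches[1]);
    print_r($animeIds);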


What’s new?

Due to busy college life and other projects, I’ve been unable to pay complete attention to finishing this; however, as summer approaches, I find myself once again with a lot of time on my hands.

Realizing that MyAnimeList was lacking a simple API to fetch anime or manga details, I decided to create my own. I teased a few screenshots at the end of the previous related post as well. I basically decided to create an unofficial API that lets you simply do what you can’t do from the official API.

Meet ‘Jikan’ – The Unofficial MyAnimeList API

Github: https://github.com/irfan-dahir/jikan

This is the scraper I’ve been talking about; it’s written in object-oriented PHP. So far it can fetch anime, manga and some character details. It’s going to be a lot more, very soon.

Hell, I even got a domain for it: http://jikan.me, although there is nothing to be seen there at the moment. For now, I plan on hosting the API there once it’s finished, for others to utilize with ease. Jikan returns data in JSON format with a simple, RESTful GET request.

It seems I’ve gotten quite sidetracked. Right now I have a solid algorithm to fetch the details required to make an anime database. The next obvious step would be to make a robust crawler, right?

 

No.

That would double the bandwidth and processing power: each page would have to be downloaded and scanned twice, once for the crawler and once for the scraper. I do realize that I previously used the crawler method and got a list of quite a few anime with their details, but it was not until a few days later that I realized MAL has a sitemap.

According to this and this, we have two less time-consuming methods. The first is a sitemap of anime listings for crawlers/search engines. The second is a method to download a huge list of entries using wildcards in the search. Personally, I have terrible internet speed and want to confirm this works by testing my API against the data it scrapes. The sitemap goes up to 33,000 anime IDs, whereas the wildcard search yields more than 107,000 anime IDs! I’ll go with the former, which covers 30-ish% of the entries.

You can also get the sitemaps for manga, characters, people, news, featured articles, etc. from https://myanimelist.net/sitemap/index.xml. Pretty useful.

So not only did we save time – we’re also less likely to break MAL’s terms and conditions. >.>

A-Anyways. We’re down to downloading and populating our personal database.

The Process

  1. Create a links file from that XML file
  2. Write a basic script to load that file and use our API to fetch the data from those links
  3. That’s pretty much it.

1 – Making the list of links

We created a links file from the XML and ended up with 12,096 links – which pretty much shows how sparsely populated the anime IDs actually are.
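The script for this boils down to reading the sitemap XML and dumping each entry’s URL into a text file, one per line. Something like this sketch – the sitemap file name here is an assumption; the real ones are listed in the index.xml mentioned above:

    <?php
    // Step 1 sketch: read an anime sitemap and write a links file.
    // 'anime-000.xml' is a placeholder name; get the real ones from
    // https://myanimelist.net/sitemap/index.xml
    $sitemap = simplexml_load_file('https://myanimelist.net/sitemap/anime-000.xml');

    $links = [];
    foreach ($sitemap->url as $entry) {
        $links[] = (string) $entry->loc;
    }

    file_put_contents('links.txt', implode(PHP_EOL, $links));
    echo count($links) . ' links written';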

2 – Using our API to go through these links and scrape the data

I’ll be using the power of my shitty internet and laptop to do this – no VPS, so these requests won’t amount to a DoS attack.
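In spirit, the loop is just this – the Jikan class and its scrape() method are hypothetical stand-ins here; the actual script is in the gist linked below:

    <?php
    // Illustrative only: the class and method names are hypothetical.
    require 'jikan.php';

    $links = file('links.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

    foreach ($links as $i => $url) {
        $anime = (new Jikan())->scrape($url); // hypothetical call
        file_put_contents("data/{$i}.json", json_encode($anime));

        sleep(2); // throttle the requests; my connection does the rest
    }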

[GIF: the script running through the links]

Of course, it’s not actually that fast – I just commented out the scraping part before running it. It will, however, look like that.

Here’s the code that was used: https://gist.github.com/irfan-dahir/70a51ba26a03161db6d451d855944e47


3 – That’s pretty much it!

Anime details get stored in JSON files and I’m able to load them whenever needed. There’s no user interface to show you, but I could dump the JSON to my GitHub once I get enough data.

 

This concludes my own anime database. But there’s more to it: the interest in having an offline version of an anime database led to me developing a MAL API. And there are upcoming updates for that!

I’ll be sure to post some stats when the scraping completes.

The first quality post ever

Last year, on the 3rd of April, I lay in excitement as I bragged to whatever audience I had about a design update for my portfolio – one that went from 💩 to something I would have called an achievement that day.

[Image: the Facebook post]

I went on about how content I was with it for the time being, until about a week later the excitement was replaced with a pit of angst as I pondered why everything looked so wrong.

Self-learning design has always been about observing it. Letting it sink in then having the option to replicate it. You have the tools. You know how to use them. But in the end, your mind is a blank slate.

 

And with that said, let’s get into a self-analysis of how I’ve taken it a step further over the year. Below, you’ll witness a murder and the perpetrator.

Now that you know, let us carry on with what was truly wrong with the former design. First of all:

[Image: the old header]

I don’t know what came to mind when I designed this, but as far as I can recall, it looked good on paper. What I wanted to implement was a Call To Action button that would scroll the viewer down to my projects section.

The button looks like a dropdown, and that pop-out makes no sense.

It was late 2016 when I caught up with simple yet effective SVGs (in detail here). My conclusion was that abrupt ends to sections were simply too terrible, especially the way I was executing them. SVGs came along and filled in that gap.

[Image: the new header with SVG section dividers]

With a much better-looking CTA (but still not the best), I managed to complement a ‘layered’ effect of different shades. Playing around with geometry transitions has always been a favorite.

Courtesy of my previous blog: https://bootyphpandi.wordpress.com/2015/10/27/the-beginning/

Unfortunately, I don’t have the files of my former design, but those triangles slid in with a 45-degree rotation. The black and cyan-ish triangles you see there are simply squares that are hidden and partially shown on hover through that 45deg tilt. The black box tilted to the right, whereas the bottom, cyan box tilted upwards. This gave it a sleek look.

Back to the post.

There was one more shortcoming. (pun intended)

[Image: Siri meme]

Siri courtesy: https://redd.it/5eht5c

The job of the CTA was to scroll the user down 100 pixels to the designated section…

It was not until later that I realized this was terrible and put some distance in between by bringing the ‘about’ section before the portfolio. Talk about proper hierarchy.

The cringe

What you see below that paragraph of cringe is a bunch of icons of the ‘skillsets’ I have. Before this, I had bars that represented this, similar to my 2015 design:

[Image: skill bars from the 2015 design]

But the thing is, there’s no limit to these skills. Something new comes out every day, and I was simply making up the experience percentages based on what I believed at the time.

Instead, I’ve replaced it with something generic and descriptive. This goes below the about section.

[Image: the new skills section]

The simple “Biography”

Speaking of the about section, here’s how it looked before.

[Image: the old about section]

Talk about a cringeworthy, long description with a font size sufficient for the elders only.

I’ve changed this entirely. It’s now the first section, so it’s right below the header.

[Image: the new about section]

Pretty dank, aye?

Navigation, ahoy

Before we progress, there is one more thing we need to talk about and that’s the sticky navigation bar that follows you down.

Before, I had this dull piece of stick:

[Image: the old navigation bar]

And now, there’s this:

[Image: the new navigation bar]

The portfolio’s new design is supposed to have more contrast. As you may have noticed (or not), the sticky nav bar is much shorter now, which adds tens of pixels of breathing space to the page.

There’s also a little border at the bottom to make it ‘pop out’, with a much more elaborate shadow than the former.

Relevant Oatmeal

[Comic: “make it pop!”]

Work of ‘The Oatmeal’ (http://theoatmeal.com/comics/design_hell)

 

The Portfolio

Let’s talk about the portfolio section. Before, it was boring cards with direct links to the download or view buttons, and the cards themselves had some design issues. I don’t have a pre-existing screenshot, but they looked like the ones now, except the images got squeezed or stretched.

[Image: the new portfolio section]

I’ve updated that part and divided the portfolio into 3 categories: Client, Designs & Apps.

A guess would be enough to realize that the client category is for client work, the designs category is for web designs I make myself, be they free or premium, and apps are app websites I’ve developed.

But that’s not the best part of the portfolio. You may see a familiar design on hover here, lacking the download and view buttons. That’s because it’s begging to be clicked.

[Image: the portfolio card hover state]

If it’s downloadable, a small download button appears as well. Now, once you click on a card, it fetches the details for the item via AJAX and, using Bootstrap Modals, we get this beauty.

[Image: the portfolio modal]

[Image: the portfolio modal, continued]

I can safely say that this is probably the first modal I’ve designed, hence some design issues here too. I need to rework the bottom part; I’m not sure how at the moment, but this is what I’ve got. It’s quick and simple. Bootstrap modals are a bonus for the user experience: click on the X or behind the modal and you’re back at the portfolio.

 

The Bottom

The contact section may look nearly identical to the former, but rest assured, it’s been revamped from a user-experience perspective. Instead of the good old page reload for the submission process, it now utilizes the power of AJAX for an asynchronous update on the spot. It’s accompanied by Sweet Alerts and Google’s reCAPTCHA, which provides some mercy for the database.

There are 860 messages, of which 4 are actual messages and the rest are spam. My website is quite popular with the bots – heh.

[GIF: messages from bots]

Nevertheless, here’s how fast and simple it is now for anyone to drop in a message.

[GIF: dropping in a message]

As for the footer, I’ve removed the bulky useless section it had before and replaced it with something simple.

[Image: the new footer]

You may have realized I removed the Twitter feed! Actually, I have plans for it. Right now I’m using free hosting from 000webhost, which disallows the use of REST APIs, and that sucks. I actually have the whole Twitter API, cache, etc. ready to roll out; all it’s lacking is the design. I thought I’d put it in the footer, but that’s just meh. I also thought I should promote my blog posts on my portfolio, so I’ve been thinking of making an entire new section dedicated to a Twitter feed (“recent rambling”) and blog posts (“recent posts”). I haven’t designed anything yet, and since they both use REST APIs, I won’t be updating this until I move to a proper, unrestricted host.

 

Enough of the design – let’s talk about the inside.

It’s what’s on the inside that counts.

I’ve re-coded everything in PHP, still not following the MVC structure but rather my own structure, which is truly odd to explain.

I’ve moved from CSS to LESS. LESS is basically better CSS syntax. Next up, I’ve started using Bootstrap as the front-end framework. As much as I promised to use only made-from-scratch stuff, that promise was really slowing me down. Bootstrap has built-in modals, grids, etc., which take a lot of the workload off. Combined, the two make for a faster and easier workflow – I’ve cut the time it takes to code a design in half.

 

What now?

I’m still not content with how it looks. I have yet to implement the section for my blog and Twitter posts. Also, the website lacks responsiveness (it’s not adapted for mobiles or tablets). I’m too lazy to add that now, but I’ll probably ninja-add it later on.

One more thing!

Branding. So far I’ve not used any logo to represent myself, and I really needed a favicon (that tiny icon you see on your tab before the web page’s title). So I followed my life motto, “Why not?”, and utilized my expert GIMP skills (2poor4photoshop) to come up with the following.

[Image: the new logo]

 

As you can see, it looks terrible at the edges, but that’s not going to show in a small logo or favicon, and I’m too lazy to perfect the small details at the moment. Maybe later on?

This concludes the berating of my former portfolio and what I did to upgrade it.


 

Making My Own Anime Database (part 1)

WHY?

Simply, I wanted to build a recursive web scraper/crawler, and an up-to-date anime database parsed into JSON was lacking on GitHub. So I’m making one! So what exactly are the steps to make your own anime database?

First off, you can’t be doing manual data entry. You need a web crawler. And I’m targeting MyAnimeList. Not in any bad sense – love the site. o.o

MAL has its own API but it’s terrible. You cannot retrieve anime info without third-party APIs and wrappers. I’ve made Stats::Extract, which extracts data from an HTML file, so this shouldn’t be too advanced for me.

THE STEPS

  1. Make the Crawler (done!)
  2. Make the Wrapper (working on it!)
  3. Make the Scraper (not even a single line)

In this post, I’ll focus on

MAKING THE CRAWLER

The crawler is a script that requires an entry point – a link, if you will – to a web page, and from there it searches for whatever you’re looking for. In my case, I’m targeting the anime (will do the manga too).

The entry point is: https://myanimelist.net

What I’m looking for: https://myanimelist.net/anime/{anime id}

So, after crawling into the entry point, it looks for anime page links and adds them to the “queue pool”. But it doesn’t end there. It does its job as a crawler and iterates through the queue pool, loading each and every page and adding more links extracted from those pages back into the pool! Now, this is a long process.
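A minimal version of that queue-pool loop might look like the sketch below. There’s no throttling or politeness in it – a real crawler should pace itself and respect robots.txt.

    <?php
    // Queue-pool crawler sketch: seed with an entry point, pull pages,
    // harvest anime links, feed unseen ones back into the pool.
    $pool    = ['https://myanimelist.net'];
    $visited = [];

    while (!empty($pool)) {
        $url = array_shift($pool);
        if (isset($visited[$url])) {
            continue;
        }
        $visited[$url] = true;

        $html = @file_get_contents($url);
        if ($html === false) {
            continue;
        }

        preg_match_all('~https?://myanimelist\.net/anime/\d+~', $html, $m);
        foreach ($m[0] as $link) {
            if (!isset($visited[$link])) {
                $pool[] = $link; // the queue grows as the crawl goes on
            }
        }
    }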

If you understood what I’m having it do, you might ask why in the world I don’t extract the anime info using a wrapper since I’m already on its web page?! Well, you see, by the time I was done, I realized that the process was just too slow. I’ve started researching multithreading/forking in PHP so I can utilize that in the scraper instead. Furthermore, I only had the crawler go through ~2,000 anime listings before I got tired of it. It proved my point: it was working. I could use it for anime that get newly added to the database or something.

I got the rest of the anime from MAL users who had the most watched-anime entries.

The crawler is completely CLI (command-line interface). The wrapper will be a PHP library, and the scraper will be CLI too.

I’ll release the source code on GitHub when it’s in a presentable state (soon).

THE PLAN

  1. Make a basic wrapper which fetches anime information (such as name, episodes, studios, producers, ratings, date aired, genre, etc). This would be a simple wrapper for the database, which doesn’t need all the information stored on MAL anime pages.
  2. Make a scraper with multithreading/forking that uses the list of MAL links I have right now to fetch their data and build my database.
  3. Re-write the wrapper as a complete NON-AUTHENTICATION API to fetch each and everything about anime, manga, people, characters, etc. – basically a complete wrapper for the whole site – and release it on GitHub, because MAL’s own API is lackluster.
  4. Re-write the scraper with the crawler and the wrapper as its main components. This time, asynchronously, the scraper will add anime links to the pool and extract the anime information on those pages directly. This could probably be the ultimate MAL Scraper.

That is, if I get it done.

Oh, and a sneak peek at the wrapper.

[Image: a sneak peek at the wrapper]

 

Part 2: https://irfandahir.wordpress.com/2017/05/13/making-my-own-anime-database-part-2/

Project.Extract Cloud (Alpha) is live

There have been delays, but it’s here. The alpha version of the CS2D log data extractor, Project.Extract Cloud, is up and running. There’s still some stuff left to do – I’ll explain in a second.

Other than that, you can only extract one file at a time; I might as well set this as the limit. I’m grateful to be hosted for free by BroHosting as a test of their hosting services, and so far there are absolutely no critical problems.



WHAT YOU SHOULD CHECK OUT!

That would be the server statistics functionality. The core of the application lies within it. Feel free to drop in whatever log you choose and get as much information out of it as possible!

 

TODO!

Text Searching

The text-searching page right now is the bare minimum – it’s maybe 5% done. It’ll end up as polished and organized as the ‘server statistics’ page.

User Database

The user database will be an offline-only feature of PE4; it’ll automatically store player information in a database for you to access easily.

Server Statistics Polishing

As complete as it looks, it’s still a fair bit from done. First off, the map graph you see is a complete dummy – it’s not implemented at all. Secondly, there’s some design polishing I need to do. Apart from that, I want to see if I can fit more data and graphs in there.

Usage Statistics

You’ve probably noticed a blank space in the black bar at the top after you click it. What’s meant to go there is a graph of your usage statistics for the browser app. The core functionality of this is complete, but I’m planning to add the graphs and such at the end.

 

PRIVACY

Some of you might be wondering about the log files you’re uploading to the server. I’ll let you know beforehand that these log files are stored. The reason for this is that they’re cached in case you reload the page. A JSON version of the extracted contents is stored as well.

When I release the beta, what I said will still apply to the offline version of Project.Extract, but the cloud version won’t store anything. Nothing will be cached.

 

That’s it for now, until the beta phase.

The Official Follow Up

So, I’m back with another blog. I do realise that the previous one (bootyphpandi.wordpress.com) was lacking a decent name, so I took it upon myself (again) to bring things to a professional state. I had plans of making my own CMS, but again I realised that I’d have to hit up AngularJS and some more alpha-type stuff to make it look like a decent CMS. That, plus the lack of time, means I’ll be using this as my official blog.

Shoutout to the Tonal theme – I really love this minimalistic freebie.

Some Updates

Portfolio Polishing

I updated my own portfolio (irfandahir.com). The design was left unfinished, so I polished it up a bit after receiving some insight and critique from forum boards. I still feel it’s lacking, so I’m devising plans to make it look nicer.

 

[Image: Omilos, by id]

Introducing Omilos

I’ve made another freebie, Omilos. I thought ‘Omilos’ was Greek for ‘something big’, but my Greek buddy corrected me once more. Nevertheless, the design is up to the level of being usable. IMO it’s lacking some design fundamentals and has some flaws, but it will get your job done, and it’s coded as cleanly as possible. If you hire a designer or have some coding skills, you’ll be able to adjust it to your needs.

Demo | Download

 

Project Extract Cloud

Project.Extract 4

If you’re a CS2D player then you might know what this is; if not, here’s a brief explanation. Project.Extract (including legacy versions 1, 2 & 3) has been a downloadable app which runs through your browser with a dependency on Apache & PHP (WAMP, LAMP). CS2D generates a fair amount of log files, so I created a PHP library to extract useful information from these logs – Project.Extract is the visual version of that. The legacy versions 1–3 only extracted user information and had text-based searching.

So a year later, after leveling up multiple times in PHP, I realised I could extract so much more. I’ve developed a PHP library, Log Miner, which acts as the core for Project.Extract 4. Both the library and PE4 are in the works. The difference between the legacy versions and this one is that it can extract A LOT more from your logs – every single detail. And the awesome part? It’s both a web-based app, if you don’t know how to set it up yourself, and downloadable, which removes the limits. I’ll talk more about it once I’m ready to deploy it.

If you’re interested then these are repos you should keep an eye on.

[REPO][PHP Library] Log Miner

[REPO] Project.Extract 4

 

That’s it for now.