Making My Own Anime Database (part 1)

WHY?

Simply put, I wanted to build a recursive web scraper/crawler, and an up-to-date anime database parsed into JSON was lacking on GitHub. So I’m making one! What exactly are the steps to make your own anime database?

First off, you can’t be doing manual data entry. You need a web crawler. And I’m targeting MyAnimeList. Not in any bad sense, I love the site. o.o

MAL has its own API, but it’s terrible. You cannot retrieve anime info without 3rd-party APIs and wrappers. I’ve made Stats::Extract, which extracts data from an HTML file, so this shouldn’t be too advanced for me.

THE STEPS

  1. Make the Crawler (done!)
  2. Make the Wrapper (working on it!)
  3. Make the Scraper (not even a single line)

In this post, I’ll focus on

MAKING THE CRAWLER

The crawler is a script that requires an entry point, a link if you will, to the website, and from there it searches for whatever you’re looking for. In my case, I’m targeting anime (I’ll do manga too).

The entry point is: https://myanimelist.net

What I’m looking for: https://myanimelist.net/anime/{anime id}

So, after crawling the entry point, it looks for anime page links and adds them to the “queue pool”. But it doesn’t end there. It does its job as a crawler and iterates through the queue pool, loading each and every page and adding the links extracted from those pages back to the pool! Now, this is a long process.
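
In other words, it’s a classic breadth-first crawl over a queue. Here’s a minimal sketch of that loop in PHP, assuming cURL and plain regexes for link extraction; the patterns and variable names are illustrative, not the unreleased source:

    <?php
    // Minimal sketch of the crawl loop (illustrative, not the unreleased source).
    $queue      = ['https://myanimelist.net']; // the entry point
    $visited    = [];
    $animeLinks = [];

    while (!empty($queue)) {
        $url = array_shift($queue);
        if (isset($visited[$url])) {
            continue; // already crawled this page
        }
        $visited[$url] = true;

        // Load the page over cURL.
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $html = curl_exec($ch);
        curl_close($ch);
        if ($html === false) {
            continue; // skip pages that failed to load
        }

        // What I'm looking for: anime page links.
        preg_match_all('#https://myanimelist\.net/anime/\d+#', $html, $found);
        foreach ($found[0] as $link) {
            $animeLinks[$link] = true; // keyed by URL to dedupe
        }

        // Feed every on-site link back into the queue pool.
        preg_match_all('#https://myanimelist\.net/[^\s"\'<>]+#', $html, $found);
        foreach ($found[0] as $link) {
            if (!isset($visited[$link])) {
                $queue[] = $link;
            }
        }
    }

    file_put_contents('anime_links.json', json_encode(array_keys($animeLinks)));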

If you understood what I’m having it do, you might ask why in the world I don’t extract the anime info using a wrapper since I’m already on its web page?! Well, you see, by the time I was done, I realized the process was painfully slow. I’ve started researching multithreading/forking in PHP so I can utilize that in the Scraper instead. Furthermore, I only had the crawler go through around 2,000 anime listings before I got tired of it. It proved my point: it was working. I could use it for anime that get newly added to the database or something.
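
For the curious: forking in PHP comes from the pcntl extension (CLI only). Here’s roughly what splitting the link list across workers could look like; the worker count and chunking are my assumptions for illustration, not the scraper’s actual design:

    <?php
    // Rough sketch of forking workers with the pcntl extension (CLI only).
    // The worker count and chunking are assumptions, not the real design.
    $links   = json_decode(file_get_contents('anime_links.json'), true);
    $workers = 4;
    $chunks  = array_chunk($links, max(1, (int) ceil(count($links) / $workers)));

    foreach ($chunks as $i => $chunk) {
        $pid = pcntl_fork();
        if ($pid === -1) {
            die("could not fork worker $i\n");
        }
        if ($pid === 0) {
            // Child process: scrape its own slice of the links, then exit.
            foreach ($chunk as $url) {
                // fetch + parse $url here
            }
            exit(0);
        }
        // Parent falls through and forks the next worker.
    }

    // Parent waits for every child to finish.
    while (pcntl_waitpid(0, $status) !== -1);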

I got the rest of the anime from users on MAL who had the most watched-anime entries.

The crawler is completely CLI (command-line interface). The Wrapper will be a PHP library, and the Scraper will be CLI too.

I’ll release the source code on GitHub when it’s in a presentable state (soon).

THE PLAN

  1. Make a basic wrapper which fetches anime information (such as name, episodes, studios, producers, ratings, date aired, genre, etc.). This would be a simple wrapper for the database, which doesn’t need all the information stored on MAL anime pages (see the sketch after this list).
  2. Make a scraper with multithreading/forking that uses the list of MAL anime links I have right now to fetch their data and build my database.
  3. Re-write the wrapper as a complete no-authentication API to fetch each and everything about anime, manga, people, characters, etc. Basically a complete wrapper for the whole site. And release it on GitHub, because MAL’s own API is lackluster.
  4. Re-write the scraper with the crawler and the wrapper as its main components. This time, the scraper will asynchronously add anime links to the pool and extract the anime information from those pages directly. This could probably be the ultimate MAL Scraper.

That is, if I get it done.
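
To give an idea of what step 1 could look like: a minimal sketch of a wrapper lookup built on PHP’s DOMDocument/DOMXPath. The XPath query is an assumption; the real selectors depend on inspecting MAL’s markup first.

    <?php
    // Minimal wrapper sketch using DOMDocument/DOMXPath.
    // The '//h1' query is hypothetical; inspect MAL's markup for real selectors.
    function fetchAnime(int $id): array
    {
        $html = file_get_contents("https://myanimelist.net/anime/$id");

        $doc = new DOMDocument();
        @$doc->loadHTML($html); // silence warnings from real-world markup
        $xpath = new DOMXPath($doc);

        $title = $xpath->query('//h1')->item(0);

        return [
            'id'    => $id,
            'title' => $title ? trim($title->textContent) : null,
            // episodes, studios, producers, ratings, aired, genres, ...
        ];
    }

    print_r(fetchAnime(1));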

Oh, and a sneak peek at the wrapper.

[Image: wrapper preview]

Part 2: https://irfandahir.wordpress.com/2017/05/13/making-my-own-anime-database-part-2/


Project.Extract Cloud (Alpha) is live

There have been delays, but it’s here. The Alpha version of the CS2D log data extractor, Project.Extract Cloud, is up and running. There’s still some stuff left to do; I’ll explain in a second.

Other than that, you can only extract one file at a time; I might as well set this as the limit. I’m grateful to be hosted for free by BroHosting as a test run for their hosting services, and so far there are absolutely no critical problems.

[Image: checkout.png]

WHAT YOU SHOULD CHECK OUT!

That would be the server statistics functionality. The core of the application lies there. Feel free to drop in whatever log you choose and get as much information out of it as possible!

TODO!

Text Searching

The text-searching page is bare minimum right now; it’s only about 5% done. It’ll end up looking as polished and organized as the ‘server statistics’ page.

User Database

The user database will be an offline-only feature of PE4; it’ll automatically store player information as a database for you to access easily.

Server Statistics Polishing

As complete as it looks, it’s still a fair way from done. First off, the map graph you see is a complete dummy; it’s not implemented at all. Secondly, there’s some design polishing I need to do. Apart from that, I want to see if I can fit more data and graphs in there.

Usage Statistics

You’ve probably noticed a blank space in the black bar at the top after you click it. What’s meant to go there is a graph of your usage statistics for the browser app. The core functionality of this is complete, but I’m planning to add the graphs and such at the end.

PRIVACY

Some of you might be wondering about the log files you’re uploading to the server. I’ll let you know beforehand that these log files are stored. The reason is that they’re cached in case you reload the page. A JSON version of the extracted contents is stored as well.
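
Just to illustrate the kind of caching I mean, here’s a minimal sketch; this is not the actual Project.Extract code, and extractStats() is a hypothetical stand-in for the real parser:

    <?php
    // Illustration only, not the actual Project.Extract code: cache the upload
    // and its extracted JSON under a content hash so a reload can reuse them.
    // extractStats() is a hypothetical stand-in for the real parser.
    $log  = file_get_contents($_FILES['log']['tmp_name']);
    $hash = md5($log);

    $jsonPath = "cache/$hash.json";
    if (!file_exists($jsonPath)) {
        file_put_contents("cache/$hash.log", $log);                     // raw upload
        file_put_contents($jsonPath, json_encode(extractStats($log))); // parsed data
    }

    echo file_get_contents($jsonPath);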

When I release the beta, what I said will still apply to your offline version of Project.Extract, but the cloud version won’t store anything. Nothing will be cached.

That’s it for now, until the beta phase.