Automating service worker cache management with sw-precache

Service workers are another great invention in the web development domain. A service worker can intercept requests and respond to them, whether from the network or from a cache, which helps provide a consistent experience even when there is no connection. It is also the backbone of the Progressive Web Application promoted by Google (I do have some opinions on PWAs at the end).

One of the best features it offers is making your web app work “offline”. Basically, it can cache two kinds of things: static files, the “required files” without which your app will fail to load; and dynamic content, such as the API calls your users make while your app is running.

In CS3216’s assignment 3, we are required to make use of a service worker to make our application work “offline”. The approach introduced in the assignment guideline follows the service worker lifecycle:

  1. Add files to the cache when the ‘install’ event is triggered.
  2. Update the cache, e.g. invalidate outdated files, on the ‘activate’ event.
  3. Serve the cached files by responding to the ‘fetch’ event, where each ‘fetch’ represents a web request.

Though this is the standard way to implement service worker caching, developers may spend quite a lot of time manually updating and debugging cached content. One of my friends complained to me that they had to manually update all the cached files in the service worker file every time their web app was rebuilt and updated.

There are only two hard things in Computer Science: cache invalidation and naming things.

Phil Karlton

In an age when automation is part of the development requirement, why manage the cache manually? Google already gives us the solution: sw-precache.

With sw-precache, instead of writing the service worker module and handling the events yourself, it automatically generates a service worker module that pre-caches all the required static resources listed in your configuration.

If you are familiar with gulp or grunt tasks, you can build an offline-ready service worker app within seconds!
Suppose you want to pre-cache all the HTML, CSS, JavaScript, and image files when the service worker is installing. If you are doing it manually, the install event handler ends up with a long list of files:
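A hand-written version of those three lifecycle steps might look like this (a minimal sketch; the cache name and file list are placeholders, not from the actual assignment):

```javascript
// sw.js — the manual approach: one handler per lifecycle event.
// CACHE_NAME and PRECACHE_URLS are illustrative placeholders.
const CACHE_NAME = 'my-app-v1';
const PRECACHE_URLS = [
  '/',
  '/index.html',
  '/styles/main.css',
  '/scripts/app.js',
  '/images/logo.png'
];

self.addEventListener('install', event => {
  // 1. Add files to the cache on 'install'.
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener('activate', event => {
  // 2. Invalidate outdated caches on 'activate'.
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(keys.filter(k => k !== CACHE_NAME).map(k => caches.delete(k)))
    )
  );
});

self.addEventListener('fetch', event => {
  // 3. Serve from cache, falling back to the network.
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```

Every new file means editing PRECACHE_URLS by hand, and bumping CACHE_NAME so the old cache gets invalidated.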


Then you also need to take care of cache updates and invalidation in the activate event, which is still a pain in the ass.

But with sw-precache, you just need to register it as a task in your build process and specify staticFileGlobs, the files that should be pre-cached:

'staticFileGlobs': [
  'app/**/*.{html,css,js,png,jpg,gif}'
]

Build your project as normal, and sw.js will be generated for you, with all the service worker events already handled.
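As a concrete example, here is what such a gulp task could look like; the app/ paths and the task name are my own assumptions, not from the original setup:

```javascript
// gulpfile.js — generate sw.js as part of the build (sketch).
const gulp = require('gulp');
const swPrecache = require('sw-precache');

gulp.task('generate-service-worker', callback => {
  swPrecache.write('app/sw.js', {
    // Everything the app needs to work offline.
    staticFileGlobs: ['app/**/*.{html,css,js,png,jpg,gif}'],
    // Serve '/index.html' instead of '/app/index.html' at runtime.
    stripPrefix: 'app/'
  }, callback);
});
```

After the build, the page just registers the generated file with `navigator.serviceWorker.register('/sw.js')`.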

What’s more, you can also use sw-precache to cache dynamic requests. You just need to set up runtimeCaching in the task config and choose a URL pattern and a handler for each route. Again, the service worker module is generated for you and everything works in seconds.

runtimeCaching: [{
  urlPattern: /^https:\/\/example\.com\/api/,
  handler: 'networkFirst'
}, {
  urlPattern: /\/articles\//,
  handler: 'fastest',
  options: {
    cache: {
      maxEntries: 10,
      name: 'articles-cache'
    }
  }
}]
Example usage: cache all your friends’ Facebook avatars and display them through the service worker.
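That avatar example could be one more runtimeCaching entry. The URL pattern below assumes avatars are fetched through the Graph API picture endpoint, which is my own assumption for illustration:

```javascript
runtimeCaching: [{
  // Hypothetical: profile pictures via graph.facebook.com/<id>/picture
  urlPattern: /^https:\/\/graph\.facebook\.com\/.*\/picture/,
  handler: 'cacheFirst', // avatars rarely change, so prefer the cache
  options: {
    cache: {
      maxEntries: 50, // keep at most 50 avatars
      name: 'avatar-cache'
    }
  }
}]
```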

The drawback of automation is that you may lose some ability to customize the generated file. If you need extra handling in the service worker lifecycle, you may still need to fall back to doing it manually, or modify the generated file.

In conclusion, if you are using a build tool in your development process and are considering using a service worker in your app, do give sw-precache a try. It just works, and it can really save you many headaches when dealing with service workers :)

IMAO: PWAs have a bright future, but we are not in that future yet

I like the idea of progressive web applications. It provides a clear guideline for the future of mobile web apps. Google also gave us some really interesting demo cases for PWAs, and installing one and using it like a native app on Android is really awesome.
However, the bad news is that iOS does not support service workers yet; maybe in the future, but not now. iPhone users won’t gain any benefit from PWAs because there is no way for them to experience one. This means PWAs have already lost a large group of users. As for “Add to Home Screen”, seriously, I don’t use it at all, and I can’t see its real value, as it just opens a new Safari window to display the application…
Also, from the angle of mobile application companies, do they really benefit from PWAs? A PWA is an enhancement of the traditional mobile web app; even if it can be installed and used offline on a phone, it is still a web app and lacks many native features. If we just want a web app with some native flavor, why not use React Native or Ionic to build a hybrid app that can do more than a PWA? If you do some research on how many big names are using PWAs, you will be disappointed.
This doesn’t mean we should just stand still. Service workers are definitely something that can change the mobile web industry. What if one day iOS fully supports service workers? Onward to a better web for everyone. If that means Progressive Web Apps, let’s do it.

CS3216 Application Critique

Group 8 brought us a very interesting app called Photomath tonight.

As the name suggests, it is an app that helps you solve math problems with your phone’s camera. It has a slick UI design and a super easy-to-use interface. Just open the app, open your math textbook, point the camera at the problem, and done.

In the presentation, group 8 made three points that interested me: the OCR technology behind the app, the idea of adaptive learning, and the comparison with the traditional calculator.

First of all, the OCR technology. The team mentioned that Photomath is a very good showcase of Optical Character Recognition (OCR), and that the technology could be used in many more interesting areas. I totally agree, and in fact that’s what the company behind Photomath, microblink, is doing right now. The most important technology in recognizing a math equation is OCR, and microblink has developed its own OCR engine, blinkOCR. Personally, I felt the performance of their recognition is pretty good: given a not-too-complex math equation, it can recognize the pattern within a second with high accuracy, and it can recognize linear equations as well! (Be alert, MA1101R students :P) However, math is a complex thing, with tons of patterns and transformations. Name cards, bank statements, resumes, documents, and ID cards are much easier to process with OCR. I remember that in the US there is a bank that supports depositing checks by just taking a picture of them with its banking application. That saves you the trouble of walking out of your house to find a check deposit point (I have to take a bus to the bank in order to deposit my check in Singapore). Their OCR engine therefore has a large group of potential customers in the financial, travel (ticket scanning), and public service (filling in annoying forms) sectors. However, OCR is already a hot thing in the industry: universities are researching it and big companies are working on it. How do you make it stand out from the crowd? This is something microblink must think about.

Secondly, adaptive learning. During their presentation on the app’s commercial potential and possible improvements, the team gave an example using a User Persona and a User Story (side note: good use of what we learned in week 3’s guest lecture!). Imagine a secondary school student, Jason, who uses the app to do his homework (let’s not discuss the “cheating” aspect for now…). After he finishes his schoolwork, he wants to learn more but doesn’t know where to find similar exercises. Here is where Photomath comes in. While he is scanning different questions, Photomath could show suggested problem sets on the screen, and testing the same concepts again would reinforce Jason’s learning. That’s the key idea of adaptive learning. At first you use it as a tool, like a calculator, but then it becomes your private math coach and a library full of problem sets. I personally like this idea very much, because when I was in secondary school I also wanted to find more questions to try beyond my homework. Back then I had to either buy more textbooks or go to math classes, which cost more money. With such an app, I could benefit from taking a picture of one question and having more questions come up. I feel this could also be a business opportunity for Photomath: they could partner with K12 education applications and integrate their technology to help children learn better.

Last but not least, the comparison with the traditional calculator. In their introduction, group 8 argued that the traditional calculator can be tedious, and that using Photomath is a much easier and more pleasant experience. I agree. When we think about the word “calculation”, what most likely comes to mind is tedious and messy writing on paper, or typing a long expression into a calculator or computer. Photomath offers a totally different user experience. It saves you the time and effort of taking out pen and paper, calculator, or laptop. All you have to do is take out your phone and snap. That’s it. Although it is not perfect for now, it does refresh people’s view of calculation. I hope it continues its development and keeps improving.

Imagine you are a 14-year-old kid again. You need to do a really difficult math exercise for tomorrow, but have no idea how to solve it. What if you could just open an app on your phone, point the camera at your textbook, snap a picture, and get detailed instructions for solving your equation? That’s the Photomath in my mind. It conquers the nightmare from when we were kids: solving hard math problems. I guess to CS students Photomath may seem useless, because it sometimes recognizes things wrongly and cannot be applied to hard questions. But from the perspective of K12 educators and parents, this is definitely a very useful app that will help their kids and students learn better. The technology involved is also promising, as OCR could equally be applied to English text, chemical elements and structures, and even… computer code? That’s just one example in education; the company could easily find more to do in other industries as well.

(Try to OCR this one with your phone?)

I think the immediate improvement the company could make to Photomath is to incorporate more advanced technology, such as machine learning and pattern recognition, to improve their recognition accuracy. Moreover, they could experiment with more complex math by partnering with Wolfram Alpha: Photomath recognizes what the equation is, and Wolfram Alpha is the professional at math calculation. In addition, as a tech company, they could be more active in the open source community and attract more developers to try out their OCR SDK, so they could gain more users and, also, money :P

Meeting machine learning in CS3216 assignments

Disclaimer: I am not a machine learning expert, and the technologies mentioned below may not be described correctly. Feel free to correct me :P

After my internship at Google this summer, I kind of became a fan of machine learning.
In our first week at Google, we attended an orientation talk about how Google works, and the Googler who gave the talk kept mentioning “machine learning”. That was the first time I heard about machine learning from Google, and from then on I realized machine learning is everywhere. Just within the company, I have an intern friend whose project was to implement a machine learning algorithm so Google could hire fewer workers in India to do repetitive manual tasks. I heard about how machine learning is used in Google Maps to automate the creation of Street View, and how it is used in Google Photos to recognize your friends’ faces. I even learned that Google is experimenting with controlling data centers with machine learning.

(Does this sound familiar?)

So I think it’s time for me to take my first steps and learn some machine learning. I don’t really want to take it as a module in NUS because, although machine learning is quite useful, I don’t want to screw up my CAP :P (Seriously, sometimes learning something out of interest is better than letting your test results disappoint you.) That’s why I am self-learning it on Coursera now. Coursera has a very famous machine learning course taught by Stanford’s famous Prof. Andrew Ng. So far I have found it a very good course. Let’s see how my learning goes.

Let’s come back to CS3216.

In our first assignment, we are creating a web app that allows you to view all of your and your friends’ past check-in locations on a map. We thought this was a very interesting idea, because visualizing your check-ins lets you easily see how many places you have been to on Earth, and you can easily compare your travels with your friends’ by looking at how many points each of you has on the map.
We also want to add some interesting statistics, such as the most interesting places you and your friends have been to, or the most visited places among your friends.

Then machine learning suddenly came into the picture. When I was discussing with my teammates what information we should put on the map, I suddenly thought about machine learning: since we have all of your past travel history and destinations, we could actually come up with a simple model to find the similarity between you and your friends, and recommend travel buddies from among them. In addition, if we wanted to be more aggressive, we could even model your travel preferences and recommend your next destination (and then found a startup and sell it to Grab or Uber).
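One such simple model is Jaccard similarity over the sets of places two users have checked in to. This is just a sketch of the idea; the data shape (arrays of place IDs) is made up for illustration:

```javascript
// How similar are two travel histories? Jaccard = |A ∩ B| / |A ∪ B|.
function jaccardSimilarity(placesA, placesB) {
  const a = new Set(placesA);
  const b = new Set(placesB);
  const intersection = [...a].filter(p => b.has(p)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}

// Recommend the friend whose travel history overlaps yours the most.
function bestTravelBuddy(myPlaces, friends) {
  return friends
    .map(f => ({ name: f.name, score: jaccardSimilarity(myPlaces, f.places) }))
    .sort((x, y) => y.score - x.score)[0];
}
```

A real recommender would weight by visit frequency or recency, but even this set overlap captures the “you two travel to the same places” intuition.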

I brought my crazy idea to the team, and we agreed it was overkill for the assignment. But it was really fun and surprising to realize how useful machine learning is, and how you could even use it in an assignment.

Looking at other teams’ ideas, I felt they could all apply machine learning! NUS CCA could use machine learning to recommend students to CCA groups or vice versa (given that you know what kinds of CCAs the students participate in). Give For Free could recommend free items to its users (by knowing what kinds of items you are interested in, perhaps by recording browsing history?). It may not be so realistic, but I think the power of machine learning can bring a web app closer to its users and fit their needs better.

How about our assignment 2?

Well, for me, assignment 2 is all about machine learning and deep learning! Our application is Prisma, which can transform your photo into a masterpiece. Prisma uses a technique based on the Convolutional Neural Network (CNN), which was initially used for pattern recognition. From reading the paper, it can be summarized that in a CNN, the style and the content of an image can be separated, which makes it possible to take the content of a photo, apply the style of a masterpiece, and create a new picture. That’s the core technology that makes Prisma “Prisma”. Theoretically, this technique could also be applied to video, voice, VR, special effects… Imagine seeing the world of “The Starry Night” through a VR headset! How cool is that?

That’s the reason why Prisma was so popular and unique after its first release. It is the first app that packs deep learning into your pocket, and it is a milestone showing a successful commercial use of machine learning and deep learning on a mobile phone. It now even supports offline filters on the iPhone, which means that in the future, neural networks may run on your phone as easily as Facebook or WhatsApp.

I think this is just the beginning. In the future, we can expect more similar apps to come along and surprise us, just like how Pokemon Go made augmented reality (AR) a new trend in the gaming industry.

CS3216 week 3

In this week’s CS3216 lecture, we had two guest lecturers (both NUS alumni) talking about two interesting topics. The first speaker, Bjorn, introduced how to grow and promote your products, and gave some really funny but inspiring examples he had used, such as targeting Facebook ads at Apple employees and using YouTube as a channel for free advertising. From the talk, I realized marketing is also an important aspect of a product’s success. Products need users. If you don’t put effort into product promotion and marketing, it’s very hard to attract more users (unless you are Pokemon Go…).
The other speaker, Chris, guided us on how to create and validate our ideas. He walked us through some useful techniques, such as writing application comparisons and user stories, and explained why we use them. I found this workshop quite useful, because by following such a process we can better define our product’s functions and scope, and also make sure the product has business value and is technologically feasible.
I guess we will benefit from these two lectures in our final projects.

Project-wise, we have started the actual coding :P Compared to other teams, we started quite late (we only confirmed the idea in late week 2). But I am glad to see that we already have a working front end and a pretty solid back end. Jinghan and Nicholette are busy building our front-end application (and they are doing a great job!). Ryan is working with me on the database design, the backend API design, and overall application management.
Someone asked us a few days ago why we don’t use Node.js instead of our current Laravel framework. In fact, our team is more familiar with Node.js. But since PHP is actually the most widely used language for Facebook app backends, and the PHP SDK is very mature, we are likely to meet fewer API compatibility issues. Laravel itself is already a mature, full-featured framework that many famous websites are built upon. In addition, we think our backend may need some computational work, such as geolocation calculations for different users. Node.js is not so good at heavy computation due to its single-threaded nature, so PHP may still be a good choice. Actually, both of them work; just pick the one that works best for you and work hard on it, right?

For the final project, I got an idea a few days ago. It came to me when I was watching a group of junior students browsing SoC’s ATAP project list: they totally didn’t know which internship on the list would work best for them, as the details were super unclear and they really had no idea whether a project would be fun. Then I suddenly thought about creating a website that lets students share their past internship experiences with other students. Imagine being able to review your internship just like reviewing an NUS module :O
You can share the interview process, the coworkers and working environment, and the internship projects. Tons of NUS students take different internships every year, and they must have experienced the good and the bad of those internships. If I were looking for an internship and such a platform existed, I would definitely benefit from it. In short, this is a platform for you to provide first-hand insider internship information, share your own experience, connect with different people, and meet fellow intern friends.
Compared with existing websites such as Glassdoor, the main focus of this website would be internship reviews, not salaries or full-time positions. Also, it could easily be extended beyond NUS to all institutions around the world. In addition, once we have enough users, it could also serve as an internship-seeking platform where companies post their internship openings and invite students to join them.
So far this is just an idea, but I think it could be quite useful. Maybe I can apply what I learned from Chris’s workshop today to validate and improve it ;D

CS3216: Assignment 1 ideas

An exciting first week since the new semester started :P We immediately felt the stress from our “infamous” CS3216 class. The first assignment’s mid submission is due next Friday! >w<

After forming our teams, we started the brainstorming stage. The first assignment is about building a Facebook application. Basically, we have to create an application that supports Facebook login, uses Facebook’s Graph API to retrieve data, and gets our users to interact with each other. Here are some ideas I came up with during our discussion:

  • Pokemon Go crowd-sourcing map
    Pokemon Go is definitely the hottest topic in SG right now! It’s as if suddenly everyone has something to do at night: catch Pokemon! If we want our project to be immediately popular, something related to Pokemon Go would definitely attract some users.
    The idea that came to my mind is to build a map showing the locations of different Pokemon. Users would log in with their Facebook accounts, mark where they caught a particular Pokemon on the map, and share that information with other users. Users could interact with each other by exchanging Pokemon sightings or sharing tips and tricks for playing the game.
    However, with this idea it is very hard to verify whether the information provided by users is authentic (if everyone claims they caught a Dragonite under the stairs, how should we verify that?). Also, this probably goes against the wishes of the Pokemon Go company and may violate the game’s agreement as well. In addition, my groupmate pointed out that Grab has already built such a map here. Therefore, probably no one would really use our application :(

  • “Friendship compatibility”
    This was once a popular game shared by my friends on WeChat. First, you are asked to select some questions and provide your answers to them. Then your friends take the challenge and answer your questions. For example, you might get a question like “When I am bored, I usually __” and select an answer for it. If your friends know you well, they will be able to pick the correct answer out of several choices (such as choosing “sleep” from some other random options). After answering all the questions, we show a score for how well your friend knows you (such as “XXX and you are 56% compatible, please consider changing friends”).
    This idea may be fun for a while, as this kind of game usually spreads rapidly among friends. However, from the user’s perspective, I would probably only create my questions once and never update them afterwards, so my friends would always answer the same questions and get bored. Secondly, after playing one set of questions with a friend, I may not want to try a second set, as the user experience is almost the same. Another issue is that the score doesn’t really reflect your friendship (after all, this is just a game with no science behind it!), so our game has the potential to destroy friendships…

  • Travel Map
    Nowadays, when we post photos and status updates on Facebook, we like to add location information as well. Adding a location makes you feel you have really been to the place, and lets you show off a little to your friends (sorry to those who are still working while I am traveling). But that location information is quite scattered: if we later want to look back on our journeys, it’s very hard to find it all and manage our memories around the world.
    Therefore, we are thinking about creating a travel map that displays all your check-in locations, so that you can write something about your memories and share them with your friends. We can make use of the Graph API provided by Facebook to collect geolocation information from your account, and use a map SDK to display it nicely.
    Your friends who use the map could also view yours and comment on your wonderful memories. Personally, I like this idea very much, and I think this app would have a longer lifetime and could be popular among friends.
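To make the Graph API part of the travel map idea concrete: a response from the /me/tagged_places edge can be reduced to plain coordinates before handing them to a map SDK. The helper and sample data below are my own illustration, not from our actual code:

```javascript
// Turn a Graph API /me/tagged_places response into {name, lat, lng} points.
// Field names follow Facebook's Graph API; the data itself would come from
// a request like:
//   https://graph.facebook.com/me/tagged_places?access_token=<token>
function extractCoordinates(response) {
  return response.data
    .filter(item => item.place && item.place.location)
    .map(item => ({
      name: item.place.name,
      lat: item.place.location.latitude,
      lng: item.place.location.longitude
    }));
}
```

Each resulting point can then be dropped onto the map as a marker with whatever SDK we pick.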

We are still figuring out what our assignment 1 will be, but I am sure we will work it out, learn a lot of technologies, and build an awesome application!