On Wednesday, three of us began our journey at our office in Münster. First, we took a train to Berlin, where Felix currently lives, studies, and works for Zweitag as a remote Software Engineer. In Berlin, we started with a nice dinner and a very short night before we flew to HEL(L).
Before Junction 2018 started, we attended the Hack Talks on Thursday at the Kulttuuritalo, organized by Junction and HelTech. The one-day conference consisted of four workshops in the Cloud and AI tracks and keynotes by six well-known speakers.
The keynotes by Michael Slater about Facebook’s Spark AR Studio and by Jaakko Lehtinen from NVIDIA about Noise2Noise were especially memorable. Spark AR Studio is a handy tool for creating augmented reality applications, and Noise2Noise is a technique that restores corrupted images while learning only from corrupted examples.
The keynotes closed with a talk by Mikko Hyppönen, Chief Research Officer at F-Secure. His entertaining talk “Cyber Arms Race” gave an insight into the actors behind current cyber attacks and their intentions.
On Friday, Junction 2018 finally started. After arriving at the campus of Aalto University, we checked in and had some time left to explore the area before we headed to the Dipoli building, where the hackathon took place.
We chose the “Towards a new generation of guidance” challenge from the city of Tampere in the “future cities” track. Our problem was to detect free parking spots in real time and guide citizens and visitors toward them. Two challenging constraints: no sensors could be installed on the street, and the solution was supposed to be very cost-efficient. After an initial two-hour brainstorming, we interviewed the partners from Tampere and validated our first draft. Our solution was to attach cameras to the city’s smart lamp posts. These cameras would periodically take pictures of the streets and detect free spots using object detection. Shortly after that, the last member of our team arrived: Josef. He is currently studying in Madrid and therefore also works remotely.
Now that we knew what to hack on this weekend, we split up and started working on the tasks. First, we needed a platform to store which spots are free. Second, we needed a way to detect free spots in an image and determine which specific spot each one is. And last, we needed some kind of guidance system for the users.
Johannes and Josef worked on the platform of our system. We found a GeoJSON file in Tampere’s open data archives that lists all available parking spots in the city. We loaded these parking spots into our database and then built a map showing all spots and their occupation status: free spots were displayed in green, occupied ones in red. Additionally, Johannes and Josef built a small API to set the current occupation status of a parking spot and to request the location of a free one.
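A minimal sketch of that platform logic, with an in-memory dictionary standing in for the real database (the GeoJSON property names such as `spot_id` are assumptions here, not Tampere’s actual schema):

```python
import json

# Hypothetical GeoJSON in the shape of Tampere's open data file
# (property names are assumptions, not the real schema).
GEOJSON = json.loads("""{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature", "properties": {"spot_id": "A1"},
     "geometry": {"type": "Point", "coordinates": [23.76, 61.50]}},
    {"type": "Feature", "properties": {"spot_id": "A2"},
     "geometry": {"type": "Point", "coordinates": [23.77, 61.50]}}
  ]
}""")

# Load every parking spot; all spots start out free.
spots = {
    feature["properties"]["spot_id"]: {
        "coordinates": feature["geometry"]["coordinates"],  # [lon, lat]
        "occupied": False,
    }
    for feature in GEOJSON["features"]
}

def set_status(spot_id, occupied):
    """Set the occupation status of one spot (what the POST endpoint does)."""
    spots[spot_id]["occupied"] = occupied

def get_free_spot():
    """Return the id and coordinates of any free spot, or None."""
    for spot_id, spot in spots.items():
        if not spot["occupied"]:
            return spot_id, spot["coordinates"]
    return None
```

In the real system these two functions sat behind HTTP endpoints and the dictionary was a database, but the core bookkeeping is this simple.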
Manh Tin and Konstantin worked on the image recognition part of our project. After experimenting with different libraries, we chose the object detection module of the Python library ImageAI. Each detected car comes with a bounding box, which we had to check against the predefined boxes of the parking spots for intersections.
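The intersection check itself can be sketched as follows. Boxes are axis-aligned `(x1, y1, x2, y2)` tuples, the format ImageAI returns as `box_points`; the concrete coordinates below are made up for illustration:

```python
def boxes_intersect(a, b):
    """Axis-aligned intersection test for boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Two boxes overlap iff they overlap on both the x and the y axis.
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def occupied_spots(car_boxes, spot_boxes):
    """Return the indices of parking spots intersected by any detected car."""
    return {
        i for i, spot in enumerate(spot_boxes)
        if any(boxes_intersect(car, spot) for car in car_boxes)
    }

# Example: one car overlapping the first of two predefined spot boxes.
cars = [(40, 20, 120, 80)]
spot_boxes = [(50, 10, 130, 90), (150, 10, 230, 90)]
```

Every spot index not in the returned set maps to a free spot, which is what gets pushed to the platform.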
To test the image recognition system and to make the demo more realistic, we wanted to show real cars. We bought a car magazine and cut out photos of cars as our mock cars. Then we drew a street on a large piece of paper and added eight parking spots. Finally, we mounted a webcam on a tower of aluminum beverage cans. Now we could place a car on our street, wait for our script to capture an image, and run the object detection. The script then sent a POST request to our platform to set the correct occupation status.
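The reporting step of that script can be sketched roughly like this; the platform URL and the JSON field names are assumptions, and the webcam capture and detection are left out:

```python
import json
import urllib.request

# Assumed endpoint of our platform; the real URL differed.
PLATFORM_URL = "http://localhost:5000/spots"

def build_update(spot_id, occupied):
    """Build the JSON body for one occupation-status update."""
    return json.dumps({"spot_id": spot_id, "occupied": occupied}).encode()

def report_status(spot_id, occupied):
    """POST one occupation update to the platform and return the HTTP status."""
    request = urllib.request.Request(
        PLATFORM_URL,
        data=build_update(spot_id, occupied),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```

In the demo loop, `report_status` was called once per spot after each captured frame had been analyzed.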
Just recognizing free spots wasn’t enough for us; we also wanted a smart way to guide people to them. To improve the usability of our idea, Felix created an “Action on Google”. Saying “Hey Google, ask Awesome Parking for a parking spot” made the phone open Google Maps and start navigating to a free spot. The endpoint for the Action on Google was defined on our platform and returned a free parking spot from the database.
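A rough sketch of what that endpoint produces, assuming the spot’s GeoJSON coordinates come as `(longitude, latitude)` and using Google’s public Maps URLs deep-link format for navigation; the payload field names are hypothetical, not the original code:

```python
def maps_navigation_url(lat, lon):
    """Google Maps deep link that starts navigation to the given coordinates
    (format from Google's public Maps URLs API)."""
    return f"https://www.google.com/maps/dir/?api=1&destination={lat},{lon}"

def fulfillment(free_spot):
    """Hypothetical fulfillment payload for the Action, built from a free spot
    as (spot_id, [longitude, latitude])."""
    spot_id, (lon, lat) = free_spot
    return {
        "fulfillmentText": f"Navigating you to parking spot {spot_id}.",
        "payload": {"maps_url": maps_navigation_url(lat, lon)},
    }
```

The real endpoint additionally marked the returned spot as reserved in the database, which is omitted here.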
After the hacking deadline on Sunday at noon, the Junction team had planned a peer review for us hackers: they had built a small application that guided us to different teams. The voting itself was comparative, meaning you always decided which team was better, the current one or the previous one. Although some of us were guided almost exclusively to teams from the “game jam” track, it was still very interesting to see what others had built in the same short amount of time. To give everyone a chance to glimpse the other projects, we took turns: while two of us stayed at our table to present our solution and show our demo, the other three explored the rest of the projects.
Many teams had developed some sort of mini-game, and some projects had been realized with Spark AR. Their applications let you create music with your facial expressions or “catch” creatures that appeared at specific beacons, for instance. Another team went for speech recognition and tracked your confidence, your number of swear words, and many other statistics, updated in real time. Some teams were even allowed to use a 3D printer and built a small self-driving car! And there were many more innovative projects to review (~300), though unfortunately we could not see them all.
For our own demo, we gave our viewers a demonstration of an abstracted real world with our car images and the makeshift street with parking spots. We explained our idea while we continuously moved the cars to different parking spots and showed how their status changed in real time on our map. Additionally, we added a live preview of the analyzed photos and displayed the map on another laptop to show how changes to the parking spots were reflected almost instantly. After that, we showed our “Action on Google” and let the viewers see how the recommended parking spot was marked as “reserved”. Of course, we also let others challenge us with edge cases: What if a car takes up two parking spots? And what if a car just drives through our street? As we had not specifically tested those scenarios before, we were pretty happy that our image recognition was robust and did not fail! Naturally, we made sure that both team members who stayed at our table for the demonstration could explain the image recognition as well as our platform.
Although we did not win our challenge or track, we still celebrated our result and finished the weekend with a dinner at Naughty Burger. Unfortunately, the prices in Finnish restaurants are very high and the servings very small, so we had to shop for a second dinner at the local K-Market. After all, pizza is always a good idea, no matter the place! On Monday we spent our remaining time sightseeing in Helsinki. We visited Suomenlinna, a sea fortress on an island off the coast of Helsinki that was declared a UNESCO World Heritage Site, with a beautiful view over Helsinki and the Gulf of Finland. After being out in the cold for so long, we headed to the Amos Rex at the Lasipalatsi, which was currently exhibiting an installation by teamLab and the Frosterus Collection. We don’t want to tell too much, but we can highly recommend planning a visit to the Amos Rex while in Helsinki.
All in all, Junction 2018 was a weekend full of all kinds of new technologies, innovations and learning.