Choosing Suitable Software
Last updated
Used Research Methods:
Relevant Research Questions:
What technological solution at the Fontys Strijp TQ building can be used to stimulate a productive feel for the building's users?
As mentioned in the chapter overview, I needed to prioritize my project deliverables. Therefore, I first had to decide on a suitable software technology.
For the buddy, my idea was that it would display a face. That meant the face needed to be animated to give the buddy a more lifelike feel. Not everything could be communicated through the face, which meant that toggling between the eyes and the face also needed some animation. An example of another screen is the time-picking menu (leftmost in the image below).
At the other end, the charging station was planned to look something like the image below. It shows the wall displaying the digitalized charging buddy (preferably, I wanted the 3D environment to be animated as well). Under the screen, I placed the wireless charging plates with the one buddy that is being charged (which is why it appears on the screen).
Knowing visually what I wanted to achieve, it turned out I would need some research in order to decide on the right software technology.
Initially, I wanted to create the prototype in Unity, as it supported the soundtracking libraries my client was going to use, could read phone battery levels (to know whether the buddy is charging), and works well for rendering 3D scenes at runtime. Moreover, coming to this internship after three game design semesters, I believed my knowledge of Unity development was fresher than the Web knowledge from my main studies.
However, when I shared my plan to develop in Unity during a Knowledge Sharing meeting with my Lectorate colleagues (other students and teachers doing projects for the Lectorate), the general response was that Unity was probably not the best option for the time I had left. After removing many of the initial concept features, there was no need for such a powerful environment. I wanted the freedom Unity gave me in building my scenes, but hearing the opinion of more than five other people made me investigate and reconsider my plans. As Unity really did appear to be the more complicated and resource-heavy solution, I decided to research an alternative that would let me achieve the effect I wanted with Web programming.
As I wanted to be sure the Web could give me what I needed, I conducted an expert interview with a Media Design teacher (and Lectorate colleague). My biggest question concerned the following problem: the phone representing the buddy has a battery, which means its battery status can indicate whether the buddy is charging. The charging screen, however, has no way of knowing whether the buddy is charging. Therefore, one of the main questions was: how can the charging station know the state of the buddy?
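On the phone side, the charging state can in principle be read directly in the browser. A minimal sketch, assuming the Battery Status API (`navigator.getBattery()`, available in Chromium-based browsers) and a hypothetical helper `describeState` that shapes the reading into a message the charging screen could later consume:

```javascript
// Hypothetical helper: turn a battery reading into a plain state object.
function describeState(charging, level) {
  return { charging: Boolean(charging), level: Math.round(level * 100) };
}

// Watch the phone's battery and report every change via a callback.
async function watchCharging(onChange) {
  // Guarded so the sketch also loads outside a browser.
  if (typeof navigator === "undefined" || !navigator.getBattery) return;
  const battery = await navigator.getBattery();
  const report = () => onChange(describeState(battery.charging, battery.level));
  battery.addEventListener("chargingchange", report);
  battery.addEventListener("levelchange", report);
  report(); // report the initial state once
}

// Example: log every state change.
watchCharging((state) => console.log("buddy state:", state));
```

This only solves half the problem: the phone knows its own state, but still needs a channel to tell the charging screen about it.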
During the expert interview, I also asked for other leads on how to proceed regarding screen loading, animation libraries, etc. With a general direction and the client's wishes in mind, I started researching on my own how to create the buddy face screens.
When I decided to continue the project on the Web, I initially wanted to use three.js (an interactive 3D rendering library) for the 3D environment at the charging station, and even for the face (I could set up an orthographic view and rig 3D objects for the eyes). However, three.js is meant for more interactive projects, and the prototype did not need that level of interactivity. Knowing that three.js has a steeper learning curve, I decided to spend my last weeks more efficiently and work with simpler solutions so that I could showcase the project. I found an alternative: I would play pre-rendered animations to save time while still giving the impression of how the project should look. For the face development, this method did not limit me in any way. The only limitation was in the 3D environment, in case I wanted to create some interaction there; as I had no such plans for now, the rendered-video option seemed quite logical (this solution came out of a discussion between me and the client).
Hence, I started looking into different options for playing animations on the web. A good option I discovered was a library called GSAP (a JavaScript animation library) for playing keyframes. I followed a tutorial explaining how to play a 3D render from Blender on a website (tutorial link) to get used to the library, and created a simple video controller to test how it worked. The gif below shows the result. With presses of different keys (displayed in the corner of the screen), I could play a sequence of keyframes forward and backwards and then switch to another sequence (the slow video is a shorter keyframe sequence, which makes it look slower). These first steps with the player showed that GSAP had the potential to be the library I would use for my animations.
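The keyframe-player idea can be sketched roughly as follows, assuming GSAP is loaded globally (e.g. from a CDN script tag) and the Blender render is exported as a numbered image sequence. The `frameSrc` naming scheme, the element id, and the key bindings are hypothetical placeholders, not the actual prototype code:

```javascript
// Hypothetical naming scheme for the rendered Blender frames,
// e.g. frameSrc(42) -> "frames/0042.png".
function frameSrc(index) {
  return "frames/" + String(index).padStart(4, "0") + ".png";
}

// Tween a frame counter with GSAP and swap the displayed image on each
// update, which plays the pre-rendered sequence forward or backward.
function playSequence(img, from, to, seconds) {
  const state = { frame: from };
  return gsap.to(state, {
    frame: to,
    duration: seconds,
    ease: "none",          // linear playback
    snap: "frame",         // step through whole frame numbers only
    onUpdate: () => { img.src = frameSrc(state.frame); },
  });
}

// Browser-only wiring, guarded so the sketch also loads outside a browser.
if (typeof document !== "undefined" && typeof gsap !== "undefined") {
  const img = document.querySelector("#render");
  document.addEventListener("keydown", (e) => {
    if (e.key === "ArrowRight") playSequence(img, 0, 120, 5); // forward
    if (e.key === "ArrowLeft")  playSequence(img, 120, 0, 5); // backward
  });
}
```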
The next small-scale prototype was a carousel that would act as the time picker. It was inspired by the following video (link), in which the author used a library called Swiper to achieve the carousel effect, so I used it as well to create the following small prototype. Again I used keyboard input: pressing <- and -> toggled between the time presets.
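The time-picker carousel could look roughly like this, assuming Swiper is loaded globally from its CDN bundle. The time presets, the `.time-picker` selector, and the `presetAt` helper are hypothetical placeholders:

```javascript
// Hypothetical time presets for the picker.
const TIME_PRESETS = ["15 min", "30 min", "45 min", "60 min"];

// Wrap an index around the preset list so arrow keys can cycle past
// either end of the carousel.
function presetAt(index) {
  const n = TIME_PRESETS.length;
  return TIME_PRESETS[((index % n) + n) % n];
}

// Browser-only wiring, guarded so the sketch also loads outside a browser.
if (typeof document !== "undefined" && typeof Swiper !== "undefined") {
  const swiper = new Swiper(".time-picker", {
    slidesPerView: 3,
    centeredSlides: true,
    keyboard: { enabled: true }, // Swiper's built-in <- / -> handling
  });
  swiper.on("slideChange", () => {
    console.log("picked:", presetAt(swiper.activeIndex));
  });
}
```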
At this point, I had the following insights:
The project would work nicely on the Web; I would be using HTML, CSS and JavaScript.
The face and the 3D animations would be played with the GSAP library, and the time picker would use Swiper. The toggle between the face and the swiper would also use GSAP (the result of that is in the next chapter).
Future communication between the phones and the charging screen could be developed with the MQTT messaging protocol. However, as decided by the client, this feature falls outside my scope within the timeframe of the project.
Having evaluated the technology, the goal for the rest of the project took shape: developing the software of the buddy device (face, time picking, etc.) and the charging screen (3D environment) separately (they would not know about each other's existence, as MQTT is not used). The connection between the two devices could be faked to give the experience of a full prototype (by pressing keyboard buttons to trigger events on the charging-station screen when the buddy is put to charge).
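Faking the connection can be as small as a key handler on the charging-screen page. The key bindings and the `stateForKey` helper below are hypothetical:

```javascript
// Map a keypress on the charging-screen laptop to a simulated buddy
// state, standing in for the MQTT message that is out of scope.
function stateForKey(key) {
  if (key === "c") return { charging: true };  // buddy placed on plate
  if (key === "x") return { charging: false }; // buddy taken off plate
  return null;                                 // ignore other keys
}

// Browser-only wiring, guarded so the sketch also loads outside a browser.
if (typeof document !== "undefined") {
  document.addEventListener("keydown", (e) => {
    const state = stateForKey(e.key);
    if (state) console.log("simulated buddy state:", state);
  });
}
```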
My next step was to create more small-scale prototypes for the buddy face. I needed to decide on the way I would animate the faces (and their style). I also needed to see how the functional screens (time picker) and the decoration screens (buddy face) would work together.
Showroom: Peer review
Library: Expert Interview
After considering options such as a Node.js server and PHP, we agreed on using the MQTT message protocol. This would work well because the protocol has a publisher and subscribers: since subscribers are always listening, the charging screen could be a subscriber and listen for the phone's state. Showroom: After this meeting, I shared the findings with my client. Even though he liked the MQTT idea, he said he wanted me to focus on the other aspects of the prototype, as he had prior experience with MQTT and could easily add it himself after the end of my internship.
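For reference, the publisher/subscriber idea could be sketched with the MQTT.js client over WebSockets. The broker URL, topic name, and payload shape are hypothetical placeholders, and as noted above this stayed outside the scope of the delivered prototype:

```javascript
// Hypothetical topic on which the phone reports its charging state.
const TOPIC = "buddy/charging-state";

// Encode/decode the state message as JSON.
function encodeState(buddyId, charging) {
  return JSON.stringify({ buddyId, charging });
}
function decodeState(payload) {
  return JSON.parse(payload.toString());
}

// Guarded so the sketch also loads without the MQTT.js library present.
if (typeof mqtt !== "undefined") {
  // Phone side: publish whenever the charging state changes.
  const phone = mqtt.connect("wss://broker.example.com:8884/mqtt");
  phone.on("connect", () => phone.publish(TOPIC, encodeState("buddy-1", true)));

  // Charging-screen side: subscribe and react to incoming states.
  const screen = mqtt.connect("wss://broker.example.com:8884/mqtt");
  screen.on("connect", () => screen.subscribe(TOPIC));
  screen.on("message", (topic, payload) => {
    const state = decodeState(payload);
    console.log(`${state.buddyId} charging: ${state.charging}`);
  });
}
```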
Prototyping