History
Formerly part of UBC CPSC210-2018W-T1.
Background
In Winter 2018, a new personal project component was added to UBC's CPSC 210, replacing the so-called Android project. This time, we were given the freedom to do anything we wanted, so long as the weekly requirements were met.
My decision to make an image sequence player for Java actually came from a JavaScript project I did for Langara's CPSC 1045 two years back, which had a similar--albeit much watered-down--project component. That was when I made my very first iteration of MotionJPEG playback, in which each frame is a JPEG image file. I managed to get it clocked nicely at 15 fps--it was much easier compared to what is here!
Over time, I would revisit that project every now and then. I managed to strip the MotionJPEG part out of its single, 2000-line JS file and turn it into a standalone project. By that time, I had already taken an Object-Oriented Programming course, and had grown fond of Java as a well-documented programming language--for the most part. This meant forgetting JavaScript, and hence no longer being able to pick up the pieces left in it.
By the second week, we had to decide on our projects. I thought it might be a good idea to do something equivalent to simple-comms: if I could get a messaging application working in a summer, why not do the same here? As a head start, I already had a working audio engine and a small API for reading EDEN configuration files (similar to JSON but--according to my peers--much, much worse). With that, I decided to breathe new life into my work on MotionJPEG.
Development
I knew a little about model-view-presenter, and used it to structure the architecture of this application. Sequence (model) was the very first class I wrote, and SequenceWorker (presenter) came after.
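As a rough illustration of that split, here is a minimal model-view-presenter sketch; the method names and bodies below are hypothetical stand-ins, not the actual Sequence and SequenceWorker APIs in this project.

```java
// Hypothetical sketch of the MVP split--the real Sequence and
// SequenceWorker classes in this project differ.
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

interface SequenceView {              // view: anything that can display a frame
    void showFrame(BufferedImage frame);
}

class Sequence {                      // model: holds the frame data
    private final List<BufferedImage> frames = new ArrayList<>();

    public void addFrame(BufferedImage frame) { frames.add(frame); }
    public BufferedImage getFrame(int index)  { return frames.get(index); }
    public int length()                       { return frames.size(); }
}

class SequenceWorker {                // presenter: feeds the view from the model
    private final Sequence model;
    private final SequenceView view;

    public SequenceWorker(Sequence model, SequenceView view) {
        this.model = model;
        this.view = view;
    }

    public void presentFrame(int index) {
        view.showFrame(model.getFrame(index));
    }
}
```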
I then proceeded to make a new audio engine based on the old one (which left the mixing to the OS), and expanded my EDEN configuration file I/O library to accommodate writing such files and to introduce polymorphism, meeting the weekly requirements. These two alone took too much time, and before I knew it, I was about to introduce myself to the messiest part of this project.
Work on the video subsystem began in late October 2018. Originally, I based some of it on the JavaScript version I did a while back. The first development release with it attached is Development G. It had infinite loops that do nothing until a certain Boolean changes value. Despite that, it still managed to play any sequence I threw at it. I consulted the lecture TAs and learned about Object.wait(), Object.notifyAll(), and their role in concurrency. Thanks to them, Development H brought many improvements to the video subsystem.
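For the curious, here is a minimal sketch of that change--replacing a busy-wait loop with wait()/notifyAll(). The class and field names are invented for illustration and are not the ones used in this project.

```java
// Hypothetical gate class illustrating the busy-wait fix; names are invented.
public class PlaybackGate {
    private boolean playing = false;

    // Before: something like `while (!playing) {}`--an infinite loop that
    // does nothing but burn CPU until the Boolean changes value.
    // After: wait() releases the monitor and sleeps until notified.
    public synchronized void awaitPlayback() throws InterruptedException {
        while (!playing) {
            wait(); // re-check in a loop: wake-ups can be spurious
        }
    }

    public synchronized void startPlayback() {
        playing = true;
        notifyAll(); // wake every thread blocked in awaitPlayback()
    }
}
```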
When I was performing some stress tests, I noticed that it only used about half of my CPU power, and that it struggled terribly at 1080p60 (as expected). So I decided to go a step further, and thus began the messiest part: multi-threading.
I had to split the task of reading image files across n threads, where n is a positive integer. While that was taken care of rather quickly, I ran into a problem in which the threads did not coordinate with one another. When they were called to synchronize with the audio, each could respond differently--depending on what it had buffered--and break the consistency of the task as a whole. The only way I could get over this was to have all threads clear their buffers on every call for A/V sync. That way, they would respond identically to synchronize calls, but it caused a huge drop in performance, making the result only a notch better than the single-threaded subsystem.
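As a rough illustration of that flush-on-sync workaround, the sketch below uses a CyclicBarrier so every reader thread clears its buffer and waits for the others before resuming. The actual code may coordinate differently, and all names here are invented.

```java
// Hypothetical sketch of the flush-on-sync workaround; the real reader
// threads may coordinate differently.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class FrameReader implements Runnable {
    private final Deque<byte[]> buffered = new ArrayDeque<>();
    private final CyclicBarrier syncBarrier; // shared by all n reader threads
    private volatile boolean syncRequested = false;

    public FrameReader(CyclicBarrier syncBarrier) {
        this.syncBarrier = syncBarrier;
    }

    // Called on every reader when the presenter needs A/V sync.
    public void requestSync() {
        syncRequested = true;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            if (syncRequested) {
                buffered.clear();        // discard read-ahead frames so every
                syncRequested = false;   // thread answers the sync identically
                try {
                    syncBarrier.await(); // wait until all readers have flushed
                } catch (InterruptedException | BrokenBarrierException e) {
                    return;
                }
            }
            // read and buffer the next frame assigned to this thread
        }
    }
}
```

Clearing the read-ahead on every sync is what makes all threads respond identically, and also why the speedup over the single-threaded subsystem was so modest.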
After days of bug-squashing, self-learning, and TA assistance, Development I was the very first version to employ the multi-threaded subsystem. It is also the very last of the developmental versions.