## Will Billingsley

- Phone: +612 6773 2513
- Email: wbilling@une.edu.au

### Research Projects and Special Topics

* (Can be undertaken in COSC301, SCI501 and SCI502, SCI500 and COSC593)

### The phone-guided vehicle

Some years ago, Prof John Billingsley (USQ) developed vision-guidance algorithms for tractors that also used GPS in a distinctive way. Earlier this year, he proposed that it should now be fairly simple to get this up and running on a phone, controlling a robot via Bluetooth.

I have a few more plans for what to do with this afterwards, but the start of this project would be to get a Bluetooth-controlled vehicle driven by a phone that sits on the vehicle -- the phone is the onboard sensor and processing bundle.
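
As a rough, hypothetical sketch (this is not Prof Billingsley's algorithm): assume the phone's camera code can report how far a guide line sits to the side of the camera axis, and that steering and speed commands go to the vehicle as short text strings over a Bluetooth serial link. The message format below is invented purely for illustration.

```python
import math

def steering_command(lateral_offset_m, heading_error_rad,
                     lookahead_m=1.0, max_steer_rad=0.5):
    """Very simple proportional guidance: combine the lateral offset of the
    guide line (metres, +ve = line is to the right of the camera axis) with
    the heading error (radians), and clamp to the steering limits."""
    steer = math.atan2(lateral_offset_m, lookahead_m) + 0.5 * heading_error_rad
    return max(-max_steer_rad, min(max_steer_rad, steer))

def encode_drive_message(steer_rad, speed_mps):
    """Invented wire format for the Bluetooth link: 'S<angle>;V<speed>\\n'."""
    return f"S{steer_rad:.3f};V{speed_mps:.2f}\n".encode("ascii")

# Example: the vision code reports the line 0.2 m to the right, 0.05 rad heading error
msg = encode_drive_message(steering_command(0.2, 0.05), speed_mps=0.8)
print(msg)  # b'S0.222;V0.80\n'
```
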
### Multi-modal monitoring

Speaking of phones having many sensors...

There have been quite a few in-home monitoring projects that are built around video. I want to go multi-modal: I know the shower's on because I can hear it -- so I don't have to put a privacy-invading camera in the bathroom. We're proposing a PhD topic around this general problem, but we can bite off small parts as thesis and masters topics.
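
As a minimal sketch of what a non-camera modality could look like (the file name, frequency band and threshold below are illustrative guesses, not a tuned detector): running water is sustained broadband noise, so we can score one-second audio frames by how much of their energy sits in a mid-to-high frequency band.

```python
import numpy as np
from scipy.io import wavfile

def water_running_score(wav_path, frame_s=1.0, band=(1000, 8000)):
    """Fraction of each frame's energy in the given band -- a crude cue for
    sustained broadband noise such as a running shower."""
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    frame_len = int(rate * frame_s)
    scores = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len].astype(float)
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / rate)
        in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
        scores.append(in_band / (spectrum.sum() + 1e-9))
    return np.array(scores)

scores = water_running_score("bathroom_clip.wav")   # hypothetical recording
print((scores > 0.6).mean(), "of frames look like running water")
```
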
### Learning analytics on richer data

There's a lot of buzz about learning analytics, but normally the data doesn't go much deeper than forum posts, site visits, activity counts, etc. In COSC220, we have huge great trails of data -- every issue, issue comment, code change, test run, etc. We don't just see the students' submissions; we see much of their development along the way. So, what can we find out when we can mine the students' assignment work in progress?
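
As a small example of the kind of mining that becomes possible, the sketch below counts commits per day in one team's Git repository (the path is hypothetical); issue comments and test runs would be pulled from the hosting platform's API in the same spirit.

```python
import subprocess
from collections import Counter
from datetime import datetime

def commit_days(repo_path):
    """Count commits per calendar day in a cloned repository -- the simplest
    possible signal of how the team's work was spread over the assignment."""
    stamps = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%aI"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return Counter(datetime.fromisoformat(s).date() for s in stamps)

# Does the team work steadily, or does everything land the night before the deadline?
for day, count in sorted(commit_days("team-repos/team07").items()):
    print(day, "#" * count)
```
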
### MathsTiles 2, and "Balance method vs Inversion method"

MathsTiles was a little something I cooked up back in the early 2000s. It's like Scratch-for-mathematics, only it can scale from algebra all the way up to complex number theory proofs.

A colleague in education (Bing Ngu) has been doing research on teaching algebra using the "balance method" or the "inversion method". These are conceptually fairly easy to represent and try out in a tile-like environment. So, let's do this...
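
For anyone who hasn't met the two methods, here's a toy illustration (plain Python rather than tiles, and nothing to do with the eventual implementation) of how they differ on x + 3 = 7: the balance method applies the same operation to both sides, while the inversion method moves the term across the equals sign and inverts it.

```python
def balance_method(a, b):
    """Solve x + a = b by doing the same thing to both sides."""
    steps = [f"x + {a} = {b}",
             f"x + {a} - {a} = {b} - {a}",   # balance: subtract a from BOTH sides
             f"x = {b - a}"]
    return b - a, steps

def inversion_method(a, b):
    """Solve x + a = b by moving +a across the equals sign as -a."""
    steps = [f"x + {a} = {b}",
             f"x = {b} - {a}",               # inversion: the term changes side and sign
             f"x = {b - a}"]
    return b - a, steps

for method in (balance_method, inversion_method):
    _, steps = method(3, 7)
    print(method.__name__ + ":", "  ->  ".join(steps))
```
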

There are a few other tricks I'd like to use so that students learn something about programming along the way. Solving an equation is an algorithm, and I'd like to take students from a task where the mechanics of the system guides them (which tiles you can move) to one where they discover the rules of the algorithm and can express them using a Scratch-like language.

### UI components for an AI-enabled world

RamSelect lets users weight particular genetic characteristics of a ram they'd like to buy (wool fibre thickness, etc.). But these characteristics have relationships between them -- they are not independent.
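
A toy illustration of that coupling (all numbers invented, and this is not how RamSelect computes its index): because the traits co-vary, the emphasis a user puts on one trait effectively leaks onto the others, which is exactly what a good interface would need to convey.

```python
import numpy as np

traits = ["fibre diameter", "fleece weight", "body weight"]
weights = np.array([0.7, 0.2, 0.1])            # what the user asked for
corr = np.array([[1.0, 0.4, 0.1],              # invented trait correlations
                 [0.4, 1.0, 0.5],
                 [0.1, 0.5, 1.0]])

# Selecting on the weighted index also drags along correlated traits
effective = corr @ weights
effective /= effective.sum()
for name, asked, got in zip(traits, weights, effective):
    print(f"{name:15s}  requested weight {asked:.2f}  ->  effective emphasis {got:.2f}")
```
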

It strikes me there are two interesting HCI problems here --

- controls for setting weightings on a touch device (I have a different idea of how to do that)
- how to express a complex relationship between variables to users

This would be a typical CHI-by-the-numbers project: build a few different versions representing different interaction styles; run a controlled experiment with some measures and a survey; report chi-squared on the categorical data and t-tests on the quantitative measures.
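
The statistics side is routine; a sketch with invented data (using scipy) shows the shape of it:

```python
import numpy as np
from scipy import stats

# Invented results for two interface variants, A and B.
# Categorical outcome: participants who completed the task vs gave up.
contingency = np.array([[18, 2],    # variant A
                        [12, 8]])   # variant B
chi2, p_cat, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-squared({dof}) = {chi2:.2f}, p = {p_cat:.3f}")

# Quantitative outcome: task completion times (seconds) per variant.
times_a = [41, 38, 45, 40, 39, 44, 42, 37]
times_b = [52, 49, 55, 47, 58, 50, 53, 51]
t, p_quant = stats.ttest_ind(times_a, times_b)
print(f"t = {t:.2f}, p = {p_quant:.3f}")
```
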

And if you get through that fast enough, there's another one on "qualitative uncertainty" in predictive models.

### Design-by-spidery-organisation

We have a very large project with the Sheep CRC that is on a mission to change how graziers raise sheep -- to get them hooked on the idea of using data and predictive algorithms to understand risk. This is a technology-led project that spans multiple organisations and is on a design mission (to change how people work). I would like to interview the members of the team to do a case study of multi-organisation, technology-led design.

This project requires very good communication skills -- it is all about rich, detailed interviewing and qualitative data analysis.

### DeepMind's Atari games but smaller

We've had a masters student working very successfully on trying out DeepMind's deep reinforcement learning system for Atari games in unusual situations -- for example, training it on two different games and seeing whether it refines the network or "unlearns".

But these experiments take quite a while to run -- partly because they involve simulating an Atari. Can we do something smaller: a deep-learning-based system for games that can run in hours rather than days (e.g., not simulating an Atari, but running lower-pixel-count games within the system itself)? And if so, can we start to characterise a "pedagogy for AI" -- how to optimise what we train it on?
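
To show the scale I have in mind, here's a toy sketch -- tabular Q-learning on an eight-cell corridor, which is not deep learning and not the DeepMind system, but a whole "training run" finishes in well under a second, which is the turnaround that would make "pedagogy for AI" experiments practical.

```python
import random

# A deliberately tiny "game": an 8-cell corridor; start at cell 0,
# +1 reward for reaching the right-hand end, -0.01 per step otherwise.
N, GOAL = 8, 7
ACTIONS = (-1, +1)

def step(state, action):
    nxt = min(max(state + action, 0), N - 1)
    return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.95, 0.1
for episode in range(500):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy should be "move right" (+1) almost everywhere.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N)])
```
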
### The forkable, compilable course

I have longstanding plans to make education work a lot more like open source projects. And it just so happens there's a UNE activity that we could do some of this around...

At the moment, the way we design and publish courses is largely manual. Edit things in Moodle. Type up Word forms with learning outcomes and send them to a committee. Yuck.

In my view of the world, courses are a little like code -- we want to be able to check them (for accreditation), diff them (for proposed changes), and produce code that runs on top of them (automated course planners).
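
A hypothetical sketch of what "courses as data" could mean (the format and the course content below are invented; choosing the real representation is part of the project):

```python
import difflib

cosc101_2016 = {"code": "COSC101",
                "outcomes": ["explain basic programming constructs",
                             "write small programs",
                             "test and debug simple programs"]}
cosc101_2017 = {"code": "COSC101",
                "outcomes": ["explain basic programming constructs",
                             "write and document small programs",
                             "test and debug simple programs",
                             "use version control"]}

# Once outcomes are structured data, "diff the course" falls out of ordinary tooling.
diff = difflib.unified_diff(cosc101_2016["outcomes"], cosc101_2017["outcomes"],
                            fromfile="COSC101 (2016)", tofile="COSC101 (2017)", lineterm="")
print("\n".join(diff))
```
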

It's relatively easy to produce code that can do this, but the tricky part is getting engagement from other stakeholders across the university. So this project would involve a lot of showing prototypes to stakeholders here and elsewhere, and conducting semi-structured interviews around a technological artefact.

### Data affordances

"Affordances" are a common concept in HCI. Early in 2016, I put a "provocation paper" into ACM's Designing Interactive Systems conference about data affordances -- for example, a key affordance of a text format is that it's diffable. Particularly, I'm interested in these as a way of breaking design constraints.

This project would involve a literature review and survey, trying to discover why people chose their data formats across a number of fields -- to see if we can gather a collection of data affordances. The target would be a ToCHI or CHI paper.

### Dimensions of Unreasonability

This would be very lit-review-heavy...

One of my research areas in my PhD was human-AI interaction. Humans and AI systems have very different ways of thinking about things, and I discovered in my PhD that AI can become a usability problem. Modelled on the idea of Cognitive Dimensions of Notations, I would like to discover a set of "dimensions of unreasonability". What are the ways in which an AI system can be awkward to reason with?