Here are a few projects I am working on, or interested in, with brief descriptions.
If you are looking for a project to work on with me, take a look here, and get in touch if something looks like it might be fun. We can tailor most projects to individual needs, from PhD to undergraduate.
__Trust and Clothing, and Art__
This is one I talked about at IFIPTM 2016 in the dinner keynote. It's actually one I have been thinking about for a long time. What if we created clothing that could (somehow, say using LEDs?) reflect the amount of trust the user's device thought the user had in whomever they were talking to? (This could be based on shared profiles at that moment).
It could be reflected in the colour of the clothing, but perhaps not visible to the subject. Viewed from above, in a kind of walk-around party setting, or perhaps something like speed dating, this would be a rather nice visual reflection of the trust relationships in the room. Even better, if you asked the subjects for their opinions right after they stopped talking to each other, you could correlate your model's prediction with the 'real thing'.
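As a very small sketch of the mechanism (purely illustrative; no particular garment hardware or LED library is assumed, and the function name is made up), the device's trust estimate could be mapped to a colour something like this:

```python
# A minimal sketch, not a hardware implementation: map a trust value in [0, 1]
# to an RGB colour that could drive a wearable LED. Names are illustrative.

def trust_to_rgb(trust: float) -> tuple[int, int, int]:
    """Blend from red (no trust) towards green (full trust)."""
    t = max(0.0, min(1.0, trust))   # clamp to [0, 1]
    red = int(255 * (1.0 - t))
    green = int(255 * t)
    return (red, green, 0)

# Example: the wearer's device estimates trust in the current interlocutor
# (e.g. from shared profiles) and pushes the colour to the garment.
if __name__ == "__main__":
    for estimate in (0.1, 0.5, 0.9):
        print(estimate, trust_to_rgb(estimate))
```

The interesting part is everything around this, of course: where the trust estimate comes from, and how the wearer and the room react to seeing it.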
Wise Computing and Computational Wisdom
Sometimes, it makes sense to trust even if you shouldn't. Sometimes, it makes sense to think twice even though everything looks good. These 'times' are difficult at best to recognise, and even when recognised, it is hard to say yes when no is obvious, or no when yes seems to be the 'right' thing to do. I see this as maturity, and I see it as an aspect of wisdom. So, Computational Wisdom is about understanding this wisdom in people better, and finding out how to incorporate it in a computational setting (can we formalise wisdom? how? when do we apply it? what does it mean?). It's linked closely to the Mature Technologies discussed below (as an enabler) but it is a concept in its own right and ripe for theoretical research. I've just started working on it. Pretty soon there will be pages on this site around the topic. For now, I'm reading… If you want to play, get in touch!
The Regret Lens
The Regret Lens is a pretty basic idea which originally grew out of a lack of storage for backups. It still holds up when you consider that online (cloud) resources cost money, and you might want to do 'quick and dirty' backups of your most important files. What 'most important' means is different for everyone, so instead of defining it for you, we want to do it with a lens that shows you the files or items on the system you are backing up. Like any interesting lens, the Regret Lens shows you things differently. With it, we can show you the items you would most regret losing, and back up those files, prioritised that way: back up the most regretted files first, then move down the list until you run out of space or time. That way, if the worst comes to the worst, you'll at least have fewer regrets!
Of course, defining what you might regret losing is an interesting problem in and of itself (we only move problems around, we don't solve them!). The thing is, using terms like regret gives us the chance to think differently about the problems.
So, for instance, you might regret most:
- Losing the files you've worked on most recently;
- Losing the pictures from the past year;
- Losing the oldest pictures on your disk (they might be hardest to find on other devices);
- Losing the documents related to your latest project (school term paper, thesis…);
- Losing your contacts database;
- Losing the lecture slides you are putting together for a new course;
- Losing tomorrow's invited talk presentation;
- Losing your archived emails;
And so on. A true Regret Lens should allow you to determine those regrets up front and work with them (changing them when you want). Even better would be one that learnt what your regrets might be from day to day.
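As a first sketch of how the prioritisation might work (assuming user-declared regret rules; every name here is illustrative rather than a real tool):

```python
# A minimal sketch of regret-ranked backup: score each file against the user's
# declared regrets, then back up the most-regretted files first until the
# storage budget runs out. Rules and weights are illustrative.
import time
from dataclasses import dataclass

@dataclass
class FileItem:
    path: str
    size: int      # bytes
    mtime: float   # last-modified timestamp

# User-declared regrets: (predicate, weight). Higher total weight = more regret.
RULES = [
    (lambda f: time.time() - f.mtime < 7 * 86400, 5.0),          # worked on this week
    (lambda f: "thesis" in f.path.lower(), 8.0),                  # latest project
    (lambda f: f.path.lower().endswith((".jpg", ".png")), 3.0),   # pictures
]

def regret_score(f: FileItem) -> float:
    return sum(weight for predicate, weight in RULES if predicate(f))

def plan_backup(files: list[FileItem], budget_bytes: int) -> list[FileItem]:
    """Back up the most-regretted files first, until the budget is exhausted."""
    chosen, used = [], 0
    for f in sorted(files, key=regret_score, reverse=True):
        if used + f.size <= budget_bytes:
            chosen.append(f)
            used += f.size
    return chosen
```

The learning version would replace the hand-written rules with something that watches what you open, edit, and search for, and adjusts the weights from day to day.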
Mature Technologies
Okay, so I stole the term from Pratchett and Baxter's 'The Long War' - but it's closely related to Device Comfort, which is addressed as part of this site and in the research group I'm building. The idea is simple: a mature technology in the Pratchett/Baxter book is described like this:
…such much-longed-for-by-customers products as computers with long battery power and fault-free software, machines that were your partners, not just a gadget for extracting money from you, not just an ad for some superior future version of themselves. Machines that seemed mature.
Terry Pratchett and Stephen Baxter, The Long War, p.43. (HarperCollins)
Yes, as in many things, science fiction points to what science could do (and borrows from it too, with a vast distributed peer-to-peer communications system, but that's beside the point). In this description, mature technologies are polished, finished. That's fine and good, and we should always aim for that, but the key here is the term 'partners'. The device comfort idea stresses that the machine can work with you, become something like your best friend in a handheld device. In other words, develop an understanding of you, build a relationship with you, and help you understand what it needs as part of that relationship — and by extension, what you might need in the context you and the device find yourselves in.
This needs a huge amount of work. But we can split it into several thrusts, which is what the device comfort project ultimately does. Slinging the 'mature' tag onto it makes me think of some extra projects that might make sense, and it brings to light others that may be of interest as shorter- to medium-term projects, and I'll list them here as I go…
- User profiles for relationships with technology (which would also allow for sharing and migration across devices)
- Information gathering about the user, through observation, through interaction, and so on (a minimal profile sketch follows this list)
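To make the first of those thrusts a little more concrete, here is a minimal sketch of a portable relationship profile, assuming JSON as the migration format; the fields are illustrative guesses, not a Device Comfort schema:

```python
# A minimal sketch of a portable user-relationship profile that could be
# shared or migrated across devices. Field names are illustrative only.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class RelationshipProfile:
    user_id: str
    comfort_level: float = 0.5                     # current device comfort, in [0, 1]
    trust_history: list[float] = field(default_factory=list)
    observations: dict[str, str] = field(default_factory=dict)  # habits, contexts, etc.

    def export(self) -> str:
        """Serialise for sharing or migration to another device."""
        return json.dumps(asdict(self))

    @classmethod
    def load(cls, blob: str) -> "RelationshipProfile":
        return cls(**json.loads(blob))
```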
Soft Security Through Trust
An oldie but a goodie - how can we design and implement security systems that explicitly and uniformly use trust as their sole means of making sensible decisions? This would mean, for instance, lower trust at the start of a relationship with a system, which might result in less access, or more guided help for access, or more observation, and so on. Behaviours that were sensible might increase trust (or trustworthiness as judged), and so access rights.
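A very rough sketch of what that might look like, assuming a per-principal, per-context trust score and a sensitivity threshold per resource (the names, starting values, and update rules are all illustrative):

```python
# A minimal sketch of soft security through trust: newcomers start with low
# trust and limited access; sensible behaviour raises trust, and with it
# access rights, while misbehaviour drops it sharply.

trust = {}  # (principal, context) -> trust in [0, 1]

def get_trust(principal: str, context: str) -> float:
    return trust.get((principal, context), 0.1)    # newcomers start low

def may_access(principal: str, context: str, sensitivity: float) -> bool:
    """Grant access only when trust meets the resource's sensitivity."""
    return get_trust(principal, context) >= sensitivity

def observe(principal: str, context: str, behaved_well: bool) -> None:
    """Sensible behaviour nudges trust up; misbehaviour costs much more."""
    t = get_trust(principal, context)
    if behaved_well:
        trust[(principal, context)] = min(1.0, t + 0.05)
    else:
        trust[(principal, context)] = max(0.0, t - 0.3)
```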
Remember - trust is contextual, and based on risk. Various formalisations exist - but what makes sense in one context is usually different in others. Think about what that means with respect to heterogeneity in trust-based security models and, as a result, their strength. Think again about weaknesses and how to address them.
Then think: can trust or soft security ever supersede or even replace the 'hard' security we have now? And if not, why not?
Wa
This is something I first thought about way back in 1992. It still resonates, though… It really just feeds the comfort and maturity ideas above, but I found a descriptive document whilst trawling through old writings, and perhaps it lends itself to some thoughts here…
Paraphrasing:
Cooperation is an important aspect of everyday life for agents. Indeed, without it, nothing much would get done, since most jobs can't be done by one agent alone. This much may be clear when considering a problem-solving model: since each agent may have incomplete or incorrect knowledge about the problem at hand, the pooling of such knowledge will surely present a clearer picture of the problem and its solution. In the physical world, I would find it difficult if not impossible to lift a table out of a room alone, but with two of us the job is almost trivial. Cooperation, then, is of paramount importance.
In order for an agent in an agent-based environment to cooperate, there must be something in it for her. This is a bare definition of utility. Agents are assumed to be utility maximisers in that they will do what is best for them at each moment. If it is in their interest to cooperate, they will, but it follows that, if it is in their interest to defect, to upset the balance, to be malevolent, then they will be so. Morality is not an issue (Of course, this may or may not be the case when viewing humans as agents, unless we assume that moral behaviour raises the utility of an act for a moral human, or agent. Religions may give rise to apparently self-sacrificing behaviour for the same reasons - the reward is waiting in the afterlife, and hence utility is vastly increased). By the way, on the morality front, a look at (Danielson, 1992) would be of great utility.
Morality is necessary to foster non-self-interested cooperation. In order for this to exist, agents must feel some moral responsibilities to one another. Since morality is not an issue in rational choice, it must be forced. In other words, morality should be introduced into the society of self-interested agents in order to make it in their interest to cooperate (At least this. Moral behaviour of course consists of much more than cooperative behaviour.) How can this be done? The argument to explore here is that the introduction of harmony into agent-based environments, coupled with trust, allows a society to impose cooperative behaviour on non-cooperative agents.
Wa is a Japanese word meaning *harmony*. It is "that which enables members of a society, in the spirit of cooperation, to coordinate their efforts in the pursuit of public and private good." (Yamamoto, 1990).
In human relationships, wa is valued, but not always present. It requires more, in the form of mutual caring and trust. Shared self-interest is not enough, and is not even necessary should the mutual feelings be present (Yamamoto, 1990). This suggests that, even in the absence of shared self-interest, wa allows stable cooperative relationships to blossom and grow.
If you hear hooves, think horse before you think zebra.
Some thoughts (and project ideas…?)
- wa allows cooperative relationships
- wa needs mutual caring and trust
- mutual caring cannot be forced, can it? If this is the case, how can we enforce harmony?
- Morality may help our agents (Danielson, 1992). Moral agents may be more successful
- Hence, it may pay to be moral, and thus caring
- Trust can be present. My formalism allows reasoning using trust
- With morality and trust, wa can be established among a small proportion of the population
- If this small proportion is more successful, others will join.
- wa will become a stable strategy, which is self-enforcing. Since it relies on trust, if that trust is broken, society will punish the offender (this is akin to work we did on enforcing regrets through system trust (Etalle et al., 2007))
- Is wa an ESS (evolutionarily stable strategy)? Prove it either way (a simulation sketch follows this list)
- wa should be self-enforcing, and stable, and able to infiltrate a self-interested society of agents, at least in the long run. It should not allow infringement or infiltration by other strategies.
- wa isn't really a strategy, more a result of a behaviour set, one which follows on automatically from that set.
- Trust and Morality give harmony in cooperation. Trust helps in the enforcement of that harmony, morality helps in preventing infringement in the first place. Harmony is the result of stable, mutually trusting and caring relationships and societies.
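As a starting point for the ESS question above, here is a minimal simulation sketch, not a proof: a small cluster of 'wa-like' agents, who cooperate but withdraw cooperation from anyone who has defected on them, is dropped into a population of self-interested defectors. The game, payoffs, and parameters are all illustrative assumptions:

```python
# A toy repeated donation game: does a small 'wa-like' cluster spread among
# defectors? Cooperation costs the giver C and gives the receiver B (B > C);
# wa agents remember broken trust and stop cooperating with the offender.
import random

B, C = 3.0, 1.0                      # benefit to receiver, cost to giver
ROUNDS, GENERATIONS, POP = 10, 50, 200

def play(pop):
    """One generation: random pairing, repeated game, average payoff per strategy."""
    payoff = {"wa": 0.0, "defector": 0.0}
    matches = {"wa": 0, "defector": 0}
    random.shuffle(pop)
    for a, b in zip(pop[::2], pop[1::2]):
        a_trusts_b = b_trusts_a = True               # everyone starts off trusted
        for _ in range(ROUNDS):
            a_coop = a == "wa" and a_trusts_b
            b_coop = b == "wa" and b_trusts_a
            payoff[a] += (B if b_coop else 0.0) - (C if a_coop else 0.0)
            payoff[b] += (B if a_coop else 0.0) - (C if b_coop else 0.0)
            a_trusts_b, b_trusts_a = b_coop, a_coop  # broken trust is remembered
        matches[a] += 1
        matches[b] += 1
    return {s: payoff[s] / max(matches[s], 1) for s in payoff}

population = ["wa"] * 20 + ["defector"] * (POP - 20)   # a small wa cluster
for _ in range(GENERATIONS):
    avg = play(population)
    if avg["wa"] > avg["defector"] and "defector" in population:
        population.remove("defector")
        population.append("wa")
    elif avg["defector"] > avg["wa"] and "wa" in population:
        population.remove("wa")
        population.append("defector")

print("wa share after", GENERATIONS, "generations:", population.count("wa") / POP)
```

With these particular (arbitrary) numbers a small cluster can spread; whether anything similar holds for richer notions of wa, caring, and morality is exactly the question to explore.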
Bibliography
- (Danielson, 1992): Peter Danielson. Artificial Morality: Virtuous Robots for Virtual Worlds. Routledge, 1992.
- (Etalle et al., 2007): S. Etalle, J.I. den Hartog and S. Marsh. Trust and Punishment. In: International Conference on Autonomic Computing and Communication Systems (Autonomics), 28-30 October 2007. ACM Press.
- (Yamamoto, 1990): Yutaka Yamamoto. A Morality Based on Trust: Some Reflections on Japanese Morality. Philosophy East and West, XL(4):451–469, October 1990.
Location-based Transitivity
In abstract form:
Much of trust management and computational trust in its more practical aspects relies on the phenomenon of transitivity. In one way or another, transitivity is used to find experts, connect like to like, recommend something or someone, build a trust picture in a network of actors, and so on.
This is interesting because, while we might argue that transitivity does work, it is something of a difficult proposition for people: outside of some very limited approaches (and short chains from person to person), trust really isn't a transitive notion. But then, it seems to work in other applications online (including security and cryptography).
This project would approach transitivity from a different perspective, arguing that transitivity does indeed have additional uses, especially in context. The context of particular interest for our purposes is location: thus, we propose a system where trust is more transitive in some places than in others. We examine how this might work in practice, using our Comfort Zones approach.
We could extend this by considering location as a placeholder for other similar contexts, in particular virtual location.
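As a toy illustration only (this is not the Comfort Zones model; the locations, factors, and function are all assumptions), trust along a recommendation chain might be discounted by a location-dependent transitivity factor:

```python
# A toy sketch: combine hop-by-hop trust values along a chain, with each hop
# discounted by how readily trust propagates in the place where it was formed.

# How transitive trust is in each location, in [0, 1]. Values are illustrative.
TRANSITIVITY = {"home": 0.9, "office": 0.7, "conference": 0.5, "street": 0.2}

def chained_trust(direct_trust: list[float], locations: list[str]) -> float:
    """Discount each hop's trust by its location's transitivity factor."""
    result = 1.0
    for hop_trust, place in zip(direct_trust, locations):
        result *= hop_trust * TRANSITIVITY.get(place, 0.2)
    return result

# Example: Alice trusts Bob 0.9 (a relationship formed at home); Bob trusts
# Carol 0.8 (a street encounter). The same chain yields different results.
print(chained_trust([0.9, 0.8], ["home", "street"]))      # low: street hop
print(chained_trust([0.9, 0.8], ["home", "conference"]))  # higher
```

Virtual locations would then just be extra entries in the table, say a corporate intranet versus a public forum.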