See my other page on predictions for the future, some of which look like inventions, however dystopic.
Transportation System: Laramie/Future#Computer-Routed_Modular_Rail_Transportation_and_Shipping
Personal Experience Streaming
Wearable cameras with front/back/side views, and microphones, and a voice-recognition user interface.
Two optional modes:
- run all the time and log to local disk, log to web service when possible
- run all the time, but only save when user speaks command or presses button (or gestures into camera) and then save indicated previous minutes from cache.
Allow indexing/tagging in realtime
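The save-on-command mode above amounts to a rolling in-memory buffer that only persists on demand. A minimal sketch in Python (class and method names are hypothetical):

```python
import collections
import time

class ExperienceCache:
    """Keep the last `window_seconds` of timestamped frames in memory;
    persist them only when the user issues a save command/gesture."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.frames = collections.deque()  # (timestamp, frame) pairs

    def record(self, frame, now=None):
        now = time.time() if now is None else now
        self.frames.append((now, frame))
        # Drop frames older than the rolling window.
        while self.frames and now - self.frames[0][0] > self.window:
            self.frames.popleft()

    def save_last(self, seconds, now=None):
        """Return the frames from the previous `seconds` for logging."""
        now = time.time() if now is None else now
        return [f for t, f in self.frames if now - t <= seconds]
```

A real device would write the returned frames to local disk and sync to the web service when connectivity allows.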
Flickr VJ Application
Allow Video Jockeys (VJs) to browse locally cached catalog sets, use tagged favorites, and select them rapidly to match music pulled from the tagged Global Music Library.
Streaming OpenSource/Micropayment Global Music Library
- The music library must have the depth of MySpace, but not be ad oriented or as clunky
- It must support rapid searching
- Users may upload songs, and provide Commons licensing for:
- whole song
- clips of song
- related images
- related text
- Users may attach rich meta-data:
- Websites, URLS
- text, lyrics
- calendar info:
- song recording dates
- song performance dates
- song/group upcoming performance dates
- Unique ID/Tags for finding info about songs, re-uses, and upcoming shows, etc.
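The upload-plus-metadata scheme above could be modeled as a simple record; a sketch with hypothetical field names:

```python
from dataclasses import dataclass, field

@dataclass
class Song:
    """One entry in the hypothetical global music library."""
    song_id: str                       # unique ID for cross-referencing re-uses, shows
    title: str
    artist: str
    license: str = "CC-BY"             # Commons license for the whole song
    clip_licenses: dict = field(default_factory=dict)    # clip name -> license
    tags: list = field(default_factory=list)
    urls: list = field(default_factory=list)             # websites, lyrics pages
    recording_dates: list = field(default_factory=list)
    performance_dates: list = field(default_factory=list)
    upcoming_shows: list = field(default_factory=list)

    def matches(self, query):
        """Rapid-search stand-in: case-insensitive match on title/artist/tags."""
        q = query.lower()
        return q in self.title.lower() or q in self.artist.lower() \
            or any(q in t.lower() for t in self.tags)
```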
Personal Application Spaces
Group and personal web space, where users may build not only homepages, but applications to share, using rapid AIDEs (AJAX Integrated Development Environments). New applications and widgets may be created, exported, and shared with the community at various levels: limited distribution, beta, and full community.
I'm working on this now, using my web component framework: Dynamide.com
Lightweight Concrete Housing
I'm actually actively working on this one, in a way to make the construction highly modular, aesthetic, and cheap, so I don't want to divulge patentable stuff. But the public ideas are being used by lots of people now: concrete in densities down to 30 Lbs/CubicFoot is being used to make buildings that combine thermal mass, thermal insulation, and fire-, water-, earthquake-, and bug-resistance.
I'm working on a concept, soon to be hosted at www.Documentus.org, that will be an Open Source documentation project. There are lots of people working on documentation for Open Source, but the results are varied. This site would have these goals:
- High-quality documentation of existing projects
- Open access for trusted editing and public viewing
Standard templates of help files and documentation.
A Contents page that
- wraps your existing documentation
- points at existing documentation
- shows missing documentation from best practices templates
A Tag cloud for your documentation, including standard tags (from a tag map) and your own tags.
New pages that supplement your existing documentation
Format: DocBook, but somehow wiki-able, or group-editable.
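The "missing documentation" check against a best-practices template could be as simple as a set difference; a sketch, with a made-up template list:

```python
# Hypothetical best-practices template: section names every project should document.
BEST_PRACTICES_TEMPLATE = [
    "Overview", "Installation", "Quick Start", "Configuration",
    "API Reference", "FAQ", "Changelog",
]

def missing_sections(existing_pages, template=BEST_PRACTICES_TEMPLATE):
    """Return template sections the project has not yet documented,
    in template order, for display on the Contents page."""
    have = {p.lower() for p in existing_pages}
    return [s for s in template if s.lower() not in have]
```

The Contents page would wrap the existing pages and list the returned gaps as red links to be filled in.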
Ontological Tag Clouds
The current usage of Tags and TagClouds is that Tags are discovered by popularity, and to some extent, by either recollection or sequential search. Future applications will need to move toward a mix of ontologies, neural networks of tags, and overlays of personal ontologies and taxonomies on well-accepted open source ontologies/taxonomies.
Thus, you'll define your tags, and you'll associate them with existing tags. The existing tags will be linked, in the style of a thesaurus, or map, rather than a hierarchical taxonomy. Hierarchical taxonomies will continue to be useful for canonical description of areas of interest, and your tags and personal ontologies will map onto and lay over these. I'm certainly not alone in creating these concepts or implementations, but I've been thinking about them for a while. You can google ontology, "Semantic Web," etc., or get started reading about what other people are doing here: www.wordiq.com/definition/Ontology_media
Lately, I've been designing a realtime application for managing your Ontological Tag Cloud. Similar to Simpy, when you are creating a link, you can see possibly related tags to the right of the edit area. But in this application, as you type, related tags are arranged by weight on your right (or wherever) so that you can dynamically see likely categories.
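The as-you-type weighting could work over a thesaurus-style tag graph; a minimal sketch (the graph shape and scoring scheme are assumptions):

```python
def suggest_tags(prefix, tag_graph, limit=5):
    """Rank candidate tags as the user types.

    tag_graph maps each tag to {related_tag: weight} edges, thesaurus-style
    rather than as a hierarchy. A tag scores by direct prefix match plus the
    weight of edges from prefix-matching neighbors, so related tags surface
    dynamically alongside literal matches."""
    prefix = prefix.lower()
    scores = {}
    for tag, edges in tag_graph.items():
        if tag.lower().startswith(prefix):
            scores[tag] = scores.get(tag, 0.0) + 1.0
            for neighbor, weight in edges.items():
                scores[neighbor] = scores.get(neighbor, 0.0) + weight
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:limit]
```

A personal ontology would then be an overlay: your own tags and edges merged into a shared, well-accepted base graph before scoring.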
Future Computer Interfaces
The return of the console-mode interface.
- Type as you go, with inline widgets, activated docbook-style.
- Control sequence to move floating widgets above or below input area.
- AI recognition of commands as you type
- No more forms - question-in-place wizards will guide you through document creation, with inline reporting.
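At its simplest, command recognition as you type reduces to incremental matching against a command list; a toy sketch with hypothetical commands:

```python
# Hypothetical console-mode command set.
COMMANDS = ["open", "save", "search", "share"]

def recognize(buffer, commands=COMMANDS):
    """Return the commands the partial input could mean, updated on
    every keystroke; an AI layer would rank these by context instead
    of simple prefix matching."""
    b = buffer.strip().lower()
    return [c for c in commands if c.startswith(b)] if b else []
```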
Classic programming smarts extension:
- you code your first program to handle 3 widgets and 5 thingymabobs
- you code a program to handle X widgets and Y thingymabobs
In the future:
- you code a program to handle 3 widgets and 5 thingymabobs
- you code a program to handle 3 widgets and N thingymabobs
- you code a program to handle M widgets and 5 thingymabobs
- you write a dispatcher AI to figure out which algorithm to run, or to run multiple and offer the results as imprecise.
Basically, we want to start re-using code by not throwing it away, but by running stuff by factory/dispatcher and by running in parallel, and comparing answers.
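The keep-old-code, dispatch-and-compare idea above might look like this in miniature (the handler names and the toy "pairs" problem are made up):

```python
def pairs_specialized(widgets, thingies):
    # The original program: only valid for exactly 3 widgets and 5 thingies.
    assert len(widgets) == 3 and len(thingies) == 5
    return 15

def pairs_general(widgets, thingies):
    # The later generalization: M widgets, N thingies.
    return len(widgets) * len(thingies)

def dispatch(widgets, thingies):
    """Instead of throwing the old code away, run every handler that
    applies and compare answers; disagreement marks the result imprecise."""
    handlers = [pairs_specialized, pairs_general]
    results = []
    for h in handlers:
        try:
            results.append(h(widgets, thingies))
        except AssertionError:
            continue  # this handler doesn't apply to these inputs
    agreed = len(set(results)) == 1
    return results, agreed
```

The handlers could equally run in parallel, with the dispatcher collecting and comparing the answers as they arrive.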
The Future of Computing
Computers are currently limited by complexity. Complexity management is what software people spend most of their time battling. Complexity and Change Management, not problem solving.
In the future, rule-based engines will fire rules that pertain to each situation, and the situations will be determined by more rules. You will specify the rules and test them against use cases, and that will be the program. This may involve some AI, or it may involve so much computing power that graphs of rules will be no big deal to wade through.
Polymorphism is dead. It is a hack to let people extend a class without modifying it or breaking previous users. Bring back the large, procedural methods with giant CASE statements. These are sequential rules.
To manage this, we will need systems that allow us to modify the code, a la COPY-n-PASTE inheritance, which is what everyone does at Application-Programmer shops anyway, and then track the changes so that the pastes are all virtual, and when you change a rule, the system will know which original code you intended to modify. Tests will be built as rules, and when you make a change, all the rules and tests will fire, and you'll know what you changed. You will need two way (digraph) change notification of ancestor and child code.
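A rules-as-the-program engine, in miniature (the temperature rules are just a placeholder domain):

```python
# Each rule is a (condition, action) pair; the engine fires every rule
# whose condition matches the situation -- the rule list IS the program,
# a giant sequential CASE statement made explicit.
RULES = [
    (lambda s: s["temp"] > 100,       lambda s: s.update(state="boiling")),
    (lambda s: s["temp"] < 0,         lambda s: s.update(state="frozen")),
    (lambda s: 0 <= s["temp"] <= 100, lambda s: s.update(state="liquid")),
]

def fire(situation, rules=RULES):
    """Fire each matching rule in sequence against the situation."""
    for condition, action in rules:
        if condition(situation):
            action(situation)
    return situation
```

Tests would be expressed as rules over the same situations, so any change to a rule re-fires everything and reports exactly what it altered.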
Composite Glass Containers
In order to make drinking containers that are neither plastic nor aluminum, make a glass bottle that is lightweight, insulating, and durable.
I believe a composite glass with micro air bubbles would suit this well. With a metal shell, it would be a superior thermos. It would seem to me that carbon would be a useful composite (inert, hard, durable, chemically stable), if it can be made into a composite with silicon glass, or even a non-glass phase of silica/silicon.
Various air-injection, or bubble making, techniques would have to be tried, and therein is probably the biggest patentable technology needed.
In the future...
CGI will advance to the point where movie stories will be complete scripts: action, motion, blocking, dialog, facial expressions, intonation--all will be captured and digitized, parameterized, and stored.
In the same way that the character Woody in Toy Story is generated from scripts, all characters will be scripted. But with ReverseCGI (tm), those scripts can be back-calculated from any performance. And, in fact, all the great Hollywood movies will be parsed. These will be fed into vast libraries of actions, characters, and scenes. To build a movie, one then just needs to annotate the script with the necessary blocking and expression. Then you select your actors and scenes, and voila! A new movie is generated. Or with RoundTripCGI (tm), an old movie is edited with new dialog, plot, replaced actors, etc.
At first, only high-end shops will be able to do this, but eventually, it will be simple enough to do by casual users, like mix tapes.
Since the algorithm will know how to parse Elizabeth Taylor's facial expressions as she delivers line X, her tone, inflection, and expression can be cast on any line of dialog. It will be possible to pair her young self with a young Brad Pitt in a remake of Sleepless in Seattle. Egad!