How the Federated Wiki might be related to the wider Hypertext world (at least one way among others to look at it, until retracted)
When navigating to a site on the Federated Wiki for the first time, the site is displayed on the left while the lineup grows from left to right. That is a little confusing, because reading (in narrow columns, of course) usually happens at the center of the screen, where the eye focuses, and a site on the left can easily be mistaken for a side menu next to an empty work space in the center. On the other hand, an interface designer needs to be careful not to merely replicate what people already know (even if that serves ease of use and easy adoption). Expectations can and should be deliberately broken to establish new, potentially better ways of interaction. So the question is whether the visual appearance should follow old conventions, bridge between old and new at the risk of being confusing (though good, self-revealing implementations are possible), or be entirely new to force the user to (re)think. A serious user can be expected to learn something new in order to benefit a great deal from it later, but this might discourage the many people who are trained to live in a simple consumerist way. Luckily, with the computer, one can have all of those worlds at the same time, interchangeably.
I found the schematic portrayal of the Federated Wiki (page 9:35) confusing for some time, because the second, smaller computer in the back, drawn over “federated wiki”, doesn’t seem to do anything at first, while the clients drawn over “traditional wiki” clearly represent independent parties/entities engaging in an activity. It’s visually difficult to spot the difference between the two models: at the top there are always multiple computers, and only the number of server disks distinguishes them, with a single disk in the traditional model. For a software developer, it’s not necessarily obvious why the clients in the traditional wiki model shouldn’t be able to talk to other server disks at another point in time, since the server disks are no different in the two models. But if only one client were drawn for the Federated Wiki model, it would become clear that the focus is on a single client pulling data from several server disks, while in the traditional model the focus is on a single server disk with several clients pulling from it. Only later, when Ward explains refactoring/federating (page 10:42), does the foreground client in the Federated Wiki model start to write to his own disk while reading unidirectionally from the other two. In the previous slides the arrows were read-write, and as the disks and sketches seem to represent the same environment/situation, one could assume that the client owns the other disks too. In the end, the client in the background pulls data from the three disks again, no different from what the foreground client did first (except for the write access to all three disks; is that an important distinction?). I assume that in this last interaction, the background client is retrieving copies from the disk the foreground client owns, who in turn got them from the two disks he doesn’t have write access to.
I mean, I think I understand the actual concepts behind it, but I’m used to very explicit statements, not unlike protocol transactions, about who owns what, who can or can’t perform certain operations, and from where to where the content flows, with elements that differ being visually separated. Maybe it’s just my old way of thinking that led to some wondering about what the presentation actually shows; it may well need to be deconstructed deliberately. But even now, I continue to look at it as interactions similar to source code hosting as pioneered by git/GitHub: there’s an original data collection, I copy it for myself, and then others can get my copy of the data collection from me, either with my modifications or unchanged. Those who receive it from me might be the original authors or other people who branch off their own multitude of copies, each on machines/disks they control rather than a centralized entity, picking and merging the best parts and ignoring the rest. In a sense, this is the nature of the network anyway, and a lot of technical and legal effort is necessary to fight it.
The “Welcome Visitors” page is the default “landing” page that gets opened/loaded when navigating to a URL running a Federated Wiki instance. This page is analogous to the index.html/index.php that gets automatically served by a default Apache webserver configuration if no particular sub-resource is specified. If you spread the billboard URL of your Wiki instance, make sure that the “Welcome Visitors” page contains links to the sub-pages you want users to be able to find. I’m not yet aware of another mechanism to learn about all the original pages and their URLs stored by a given instance if they’re not linked on pages reachable via “Welcome Visitors”. That might be a feature: some sub-pages could be reserved to be linked only from the ordinary web, forming separate navigation spaces without bridges from one to the other, even if they’re hosted on the same Wiki instance.
I don’t like browsers, servers and the web because of the CORS nonsense, the lack of interfacing with the local hardware, operating system and software environment, and the dependency on centralized services, which is why I’m interested in working with the Federated Wiki outside of and besides the browser. Now, as the actual data is all serialized in JSON, there need to be ways to retrieve it similar to how the browser does it. With the “Welcome Visitors” page as a kind of “.well-known” bootstrap discovery policy, and every domain.tld/view/welcome-visitors having a corresponding domain.tld/welcome-visitors.json, I hope it shouldn’t be too difficult to develop a native client for the Federated Wiki: a generic retriever would only have to find out that a resource URL in fact denotes a Wiki instance path, so that the right handlers/interpreters/converters can be triggered to make sense of the data.
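To sketch what that bootstrap step could look like in a native client: the slug rule below (lowercase the title, replace runs of whitespace with dashes) is my assumption from the URLs above, and the real rules may well differ for punctuation and non-ASCII characters.

```python
def page_slug(title: str) -> str:
    """Derive a page slug from its title.

    Assumption: the slug is the lowercased title with whitespace
    collapsed to single dashes, e.g. "Welcome Visitors" ->
    "welcome-visitors". Punctuation handling is not covered here.
    """
    return "-".join(title.lower().split())


def page_json_url(site: str, title: str) -> str:
    """Map a site and a page title to the page's JSON resource URL."""
    return f"https://{site}/{page_slug(title)}.json"


# The discovery entry point of an instance would then be:
print(page_json_url("domain.tld", "Welcome Visitors"))
# https://domain.tld/welcome-visitors.json
```

From that single bootstrap URL, everything else would be found by following links in the returned JSON.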
I’m pretty convinced that software developers will immediately understand the concept if it is described like this: where MediaWiki is the SourceForge of writing web documents collaboratively, Federated Wiki is the GitLab of it, with the additional strong encouragement that everybody should run their own Wiki server instance to host their websites. Sure, some more work might be beneficial: improved interaction controls; checked/verified/versioned/dependency-managed/libre-licensed software repositories to offer plugins (similar to GNU/Linux software repositories or an app store, something that Engelbartians would call “tool system capability infrastructure”, from which components can be installed automatically, by policy or manually, to stay available on the client permanently, for some time, or for as long as documents are stored that reference the plugin); and of course building/curating the digital universal library of canonical and not-so-canonical works, which must be libre-licensed to be of any use as a common good.
A default “Welcome Visitors” page isn’t well-formed. The header tags <link>, <meta> and <input> are left unclosed without apparent reason (they are void elements in HTML, but that makes the markup unparseable as XML). The class attribute is used multiple times, probably because of an arguable design flaw of XHTML.
There are a few things I didn’t realize immediately, for example: referring to other pages via an internal link. It worked while I was authoring because I had the target pages loaded in my browser, but I later found out that for new visitors those links wouldn’t resolve, nor did I know of a way to point them to the target except with a link to an external destination. Only after looking at the Federated Wiki through the lens of decentralized servers, or of the git/GitHub repository forking model, did I finally understand that it would be best to copy/fork the pages in question to my own subdomain instance and leave them there unmodified, so users would be able to retrieve them on first encounter without having visited other servers first; and from the history and origin of those pages, visitors would be provided with entry paths to explore the entire interconnected federation instead of being stuck on my node.
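Such entry paths could even be followed programmatically. A minimal sketch of extracting internal links from a page’s serialized story, assuming internal links appear as [[Page Title]] inside an item’s text field (which matches what I see in the JSON, though I haven’t checked edge cases):

```python
import re

# Internal links in Federated Wiki text look like [[Page Title]].
LINK = re.compile(r"\[\[([^\]]+)\]\]")


def internal_links(story: list) -> list:
    """Collect the titles of all internal links in a page's story.

    Assumption: the page JSON has a "story" array of items, each
    with an optional "text" field containing the wiki markup.
    """
    titles = []
    for item in story:
        titles.extend(LINK.findall(item.get("text", "")))
    return titles


story = [
    {"type": "paragraph", "text": "See [[Welcome Visitors]] and [[Field Guide]]."},
    {"type": "paragraph", "text": "No links here."},
]
print(internal_links(story))  # ['Welcome Visitors', 'Field Guide']
```

A crawler could feed each extracted title back through the slug convention to fetch the next page, and so walk outward from any forked page into the federation.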
When adding a new paragraph, I didn’t read carefully that the factory not only offers some specialized applications/functions, but also formats like HTML. One question is whether a selection can be changed later, and whether the selected “paragraph type” is semantically marked with a MIME type or (XML) namespace identifier; I believe I saw that the JSON contains such an indicator, which is very useful.
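For reference, a story item in the serialized page does carry such a type indicator; roughly like this (the id value here is made up, and the field set may be incomplete):

```json
{
  "type": "html",
  "id": "0123456789abcdef",
  "text": "<p>Raw <em>HTML</em> content of this item.</p>"
}
```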
I often accidentally leave the edit field because I want to copy &amp; paste typographic characters, need to look up a word in the dictionary as a non-native speaker, want to copy a URL to insert a link, or want to copy text or navigate pages in my lineup, and every time, the draft I’ve written so far gets saved as a change, potentially “polluting” the revision history. I’m not against that, on the contrary: if this is already what’s happening, then we could track changes down to the character. But maybe only the “effective” changes should be saved, or everything all the time, or there should be some control over work in progress versus publication of the final form. Or the editor should send the last local changes to the server immediately and publish them (for autosave), but when a consecutive editing operation follows (judged by time, GUI interaction events, character position of the operation, type of operation?), the previous version gets replaced by the later one. Maybe those many changes are already just local ones, but if I close the browser, I guess I would lose them if they don’t get sent to the server, wouldn’t I? Maybe the versions I see right now aren’t the many, many small operations they look like, but the major changes I published; it doesn’t look like it, though I didn’t investigate more closely what’s actually going on. Anyway, in the end, I guess we want to get away from heuristic diffs and develop a new approach; that’s probably a goal worth working on in the future, for the Federated Wiki as well as for text editors and writing tools in general.
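The replace-the-previous-autosave policy I’m imagining could be modeled like this; a minimal sketch, where the journal/action field names are my guess at the serialization, and a real consecutiveness check would also consider time, GUI events and operation type as mentioned above:

```python
def squash_consecutive_edits(journal: list) -> list:
    """Collapse runs of consecutive 'edit' actions on the same item
    into the final one, so autosaved drafts don't pollute the history.

    Assumption: the journal is a list of actions with "type" and "id"
    fields; only directly adjacent edits of the same item are merged.
    """
    squashed = []
    for action in journal:
        if (squashed
                and action.get("type") == "edit"
                and squashed[-1].get("type") == "edit"
                and squashed[-1].get("id") == action.get("id")):
            squashed[-1] = action  # the later autosave replaces the draft
        else:
            squashed.append(action)
    return squashed


journal = [
    {"type": "create", "id": "a1"},
    {"type": "edit", "id": "a1", "text": "draft"},
    {"type": "edit", "id": "a1", "text": "draft, continued"},
    {"type": "edit", "id": "a1", "text": "final wording"},
]
print(squash_consecutive_edits(journal))
# [{'type': 'create', 'id': 'a1'},
#  {'type': 'edit', 'id': 'a1', 'text': 'final wording'}]
```

Whether the squashing happens client-side before sending, or server-side when recording the journal, would be exactly the kind of policy control I’m asking for.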