
Wikipedia Client

This page is kept for historical interest. Its content is outdated or may be wrong. You may find more up-to-date information on the www.mediawiki.org website.
A proposal to move this page to MediaWiki.org was rejected.


See also: Dedicated Wikipedia editor, Machine-friendly wiki interface


Some client-side readers/editors are currently being developed.

Wikipedia clients


WINOR


<del>WINOR is currently under development. Contact Magnus Manske if you'd like to help or test.</del> No longer available.

WWW-Mediawiki-Client


WWW-Mediawiki-Client has been, and continues to be, developed as a Perl library plus a small wrapper script. It behaves much like a CVS client: updates download MediaWiki content, and commits upload it.

Comments


I am unsatisfied with using a web browser to edit content. I am unsatisfied with switching between editors and other applications to use Wikipedia. I am unsatisfied with the latency and pull nature of the "RecentChanges" link. I am unsatisfied with the lack of immediate collaboration on articles, i.e. people working on the same document truly simultaneously.

These are all limitations that are imposed on the Wikipedia user by web interfaces, and they are not going to go away anytime soon. The only solution is to create a client that can access Wikipedia and which provides these features. Such a client would not replace the existing web interface, of course: it would merely be an additional, more effective way for advanced users to use Wikipedia.

Specifically, the client should:

  • include a (visual) editor with Wikipedia-specific features and allow integration of alternative editor components
  • automatically fetch RecentChanges updates at regular intervals
  • make it very simple to navigate revisions, for example with a two-panel window that highlights differences
  • integrate with the Wikipedia IRC channel
  • when two users open the same document with the client, allow collaboration mode: a chat is opened, and one user can assume control of the cursor and mouse
  • cache articles locally and allow offline browsing, searching and indexing of Wikipedia tarballs
  • do anything else you can think of.

To implement such a beast, one would at first have to screen-scrape the existing Wikipedia, using the "protocol" defined by its HTML layout and forms. This is a very unsatisfying solution, and eventually the Wikipedia server should learn to accept standardized XML-RPCs.

The actual implementation could go forward in several steps. At first, a simple browser/editor component would be the most important: Being able to navigate and use Wikipedia without an actual web-browser would be nice. More advanced features could be implemented later.

I would personally very much like to use such a client, and possibly be interested in writing it (in my case, likely in C++/Qt or C/GTK). Is there any broader interest in such a tool?

I started a Wikipedia editor as an open source plugin for Eclipse, see http://www.plog4u.org
To get such a "complete beast" running, one would have to look into other plugins, to build something like a "Wikipedia suite" of plugins.
For example, for collaboration this plugin may be interesting to start with: http://sobalipse.sourceforge.net -- Axel 19:40, 7 Feb 2005 (UTC)

Oh dear god, not XML-RPC! I agree that having to parse out the table of contents and extra links and so forth is bad, but there are ways to do what we need via plain old HTTP. For instance, to get the raw data for a page, you might use a URL like "http://www.wikipedia.org/wiki/some_article?output=raw". It would also be nice to be able to get the page as HTML, but without the extra links, like so: "http://www.wikipedia.org/wiki/some_article?output=html". - TOGoS
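(A later illustration: MediaWiki did in fact grow essentially this feature as the action=raw parameter on index.php. A minimal Python sketch of the fetch, using that parameter; the article title is just an example:)

    import urllib.request

    # Fetch the raw wikitext of one article over plain HTTP, in the spirit
    # of the ?output=raw idea above. MediaWiki's real parameter on index.php
    # is action=raw; the article title is only an illustrative example.
    title = "Tristan_Tzara"
    url = "https://en.wikipedia.org/w/index.php?title=" + title + "&action=raw"
    req = urllib.request.Request(url, headers={"User-Agent": "wiki-client-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        wikitext = resp.read().decode("utf-8")
    print(wikitext[:200])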

I agree that XML-RPC (or SOAP or WebDAV) feels like overkill. A simple XML return format for things like recentchanges, history, and metadata (author, revision date) might be useful, but plain old HTTP should provide all the calling semantics we need. --Brion VIBBER 01:34 Dec 11, 2002 (UTC)

You're right. I like to complain about XML (I think XML-RPC is especially over-hyped), but it has its place. Anyway... What I'm really thinking is that the interface between the server and the client should be as simple and intuitive as possible. For instance (to expand on what I already said), to update a page, you would just do a PUT (or POST, more likely) request to 'http://www.wikipedia.org/wiki/article_name' with the new raw wiki data as the body of the request. Right now, with the web interface, you have to post data to some weird, non-intuitive URL. I know the client would take care of this (and I actually don't really mind it for the web interface), but having a simple, intuitive, not-bound-to-the-implementation client-server interface has a number of benefits:

  • makes the system more modular
  • easier job for the implementors
  • we don't want our computers to end up speaking the equivalent of English, do we? (go loglan!)

So I think it would be worth it in the long run to do a little extra work to get the server to understand a more intuitive 'language'. What's it running? Apache...? Might take a little work. But hey, that's what Ruby's for. Write your own server! (Perhaps this rant belongs on another page. Feel free to move it if you think so.) - TOGoS
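(A Python sketch of the calling convention TOGoS proposes above. The endpoint is hypothetical -- Wikipedia accepts no such PUT -- so the request is only constructed, never sent:)

    import urllib.request

    # Hypothetical: PUT new raw wiki data directly to the article URL, as
    # proposed above. This endpoint does not exist; the request is built
    # only to illustrate the proposed interface.
    article = "Some_article"
    new_wikitext = "''New'' article text."
    req = urllib.request.Request(
        "http://www.wikipedia.org/wiki/" + article,  # proposed, not real
        data=new_wikitext.encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "text/x-wiki"},
    )
    # urllib.request.urlopen(req)  # would submit the edit if the server supported it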

In Defense of XML-RPC (SOAP)

  1. XML-RPC (let's say SOAP) may seem like overkill, but I predict that using standards will pay off in unpredictable ways. Instead of people having to learn about our oh-so-minimalistic HTTP-based protocol, they can just leverage the SOAP skills they have now.
  2. Relying on a standard gives us room to grow, because we can trust that the standard-makers have had a little more time to think issues through. It would be embarrassing to find ourselves gradually extending our minimalistic protocol, a little piece here and there, and before we know it finding that we have just reinvented SOAP.
  3. Now (Nov 2003), SOAP implementations have reached a level of maturity where we don't really have to do any extra parsing work ourselves.
  4. XML-RPC and SOAP have ready-made libraries in all modern languages (PHP, Python, Perl, Java, C, C++, C#, Visual Basic, Ruby... even JavaScript) and are integrated into large editing frameworks (like OpenOffice.org, Mozilla/XUL, Eclipse and so on). This would accelerate clean and fast integration into third-party applications (e.g. even an MS-Office-based client would be easy through Visual Basic/SOAP); see the sketch after this list.
  5. It has become a common choice for other wiki software opening their APIs (at least PhpWiki, MoinMoin, JSPWiki, TWiki, UseMod and OpenWiki), so this leads to unification in some way.
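(For illustration, a minimal Python sketch of such a remote call, using the XML-RPC WikiRPCInterface that several of the wikis named above exposed. The server URL is hypothetical, and the method names follow that interface, not anything MediaWiki actually shipped:)

    import xmlrpc.client

    # Hypothetical server exposing the WikiRPCInterface (as JSPWiki and
    # MoinMoin did); wiki.getPage / wiki.putPage are that interface's
    # method names, not MediaWiki's.
    server = xmlrpc.client.ServerProxy("http://wiki.example.org/RPC2")
    text = server.wiki.getPage("FrontPage")            # fetch raw page source
    server.wiki.putPage("FrontPage", text + "\n", {})  # write it back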

I think it would be good to get CVS access to the database. The list from above:
  • include a (visual) editor with Wikipedia-specific features and allow integration of alternative editor components
No benefit from using CVS.
  • automatically fetch RecentChanges updates at regular intervals
Would no longer be needed, as a cvs diff already gives that (and more) information.
  • make it very simple to navigate revisions, for example with a two-panel window that highlights differences
You mean cvs diff -D yesterday -D "20 days ago"?
  • integrate with the Wikipedia IRC channel
There are many IRC clients available. I don't think it's needed at all.
  • when two users open the same document with the client, allow collaboration mode: a chat is opened, and one user can assume control of the cursor and mouse
CVS already addresses that problem.
  • cache articles locally and allow offline browsing, searching and indexing of Wikipedia tarballs
That is the main concept of CVS. You check out a working copy which is permanently available to you.
And the other arguments:
  • makes the system more modular
I think more modular than CVS is not possible.
  • easier job for the implementors
Once the server supports CVS there is nothing more to implement, as everything else already exists.
  • we don't want our computers to end up speaking the equivalent of English, do we? (go loglan!)
I would prefer German, but I don't really understand what you mean ...
--Bodo Thiesen

First, I don't quite understand the first paragraph. Can you expand on these problems you see? For instance, I don't see how it's a bad thing that we can't work on articles "truly simultaneously."

Wouldn't the disadvantages of such a tool (mainly, having to download it--whereas at present anybody with a web browser can edit Wikipedia) trump the advantages of requiring a Wikipedia editing program? --172.139.98.xxx (which is to say, Larry Sanger in the field :-) )


The existence of a separate client for editing pages (as opposed to merely reading them) might also serve as a useful pons asinorum for editors. --LDC


We could probably have a dual interface option: you can use your browser for quick access, or download the client for increased functionality.

I'd be interested in such a beast. However, I'm not sure about the real-time collaboration mode. I don't think everyone gets along well enough to work in real-time. ;-)

For the record, the Opera web browser has a feature to automatically reload any page at set intervals. --Stephen Gilbert

Oh, one more thing. Have you ever used the wxWindows library? (http://www.wxwindows.org) It's a C++ framework designed for cross-platform programming. I'm told it plays nicer with Win32 than either Qt or GTK. --Stephen G.


I'd recommend writing something that would work as an add-on to Mozilla. I suspect most of the functionality desired is already built-in or is available in other plugins already. Also, it seems to be the easiest way to construct platform-agnostic software these days. --TheCunctator


Wow, thanks for all the feedback. First, my idea was never to replace the web interface, but only to provide an additional way to access Wikipedia for regular users. However, I think the client could have some features that the web interface does not offer (e.g. collaborative editing), which would always be optional for all concerned parties to use.

I'm familiar with Opera's nice reloading feature, but for the client I would, at least in the long term, envision something different, namely a refresh that only shows changes made after the last load. This would require additional intelligence on both client and server, but would drastically reduce the bandwidth required for refreshes. The Recent Changes page, especially when very large, takes hundreds of kilobytes, and reloaded at regular intervals, it generates quite significant amounts of traffic. Now, I certainly hope that Wikipedia's financial situation is good, but from a cost-benefit analysis, it is clear that the money saved by a low-footprint refresh would easily outweigh the time required to implement the server and client intelligence.
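(A later illustration of this low-footprint refresh: the modern MediaWiki API, which did not exist when the above was written, can return only the changes newer than a given timestamp. A Python sketch, with a hypothetical starting timestamp:)

    import json
    import urllib.parse
    import urllib.request

    # Ask only for changes newer than the last timestamp we saw, instead of
    # re-downloading the whole RecentChanges page. Uses the modern MediaWiki
    # API; the starting timestamp is a hypothetical example.
    API = "https://meta.wikimedia.org/w/api.php"
    last_seen = "2006-01-01T00:00:00Z"
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcend": last_seen,   # newest-first listing stops at this timestamp
        "rclimit": "50",
        "format": "json",
    }
    req = urllib.request.Request(API + "?" + urllib.parse.urlencode(params),
                                 headers={"User-Agent": "wiki-client-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        for change in json.load(resp)["query"]["recentchanges"]:
            print(change["timestamp"], change["title"])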

Regarding wxWindows, yes, I have seen it in action, but never used it myself. It may be a good alternative to Qt or GTK.

Mozilla expansion -- hmm, I'm not sure about that; there's nothing in Mozilla (not even the web browser component) that I would like the client to support. Rather, it would render the limited subset of HTML Wikipedia uses directly. This would make it possible to write a small combined browser/editor component (what the web browser should have been in the first place). - Eloq.

Ah, I see. Recent Changes does indeed eat up bandwidth; it's been a source of problems for us in the past when floods of new traffic came in. w:Jimbo Wales could tell you more about that.
A mozilla plugin makes a lot of sense to me. I usually have a number of tabs open for references when I'm editing, and the editor component integrated with my regular browser would be much easier to use than having to constantly switch applications. -- Arvindn
You'll probably want to chat with w:Magnus Manske. He's the main guy coding our new Wiki software. Or better yet, if you're handy with PHP and SQL, head over to http://wikipedia.sourceforge.net and play with the code --Stephen Gilbert

The UseModWiki RecentChanges already supports 'List new changes starting from t' where t is the last time you loaded RecentChanges (actually, it's the time of the last change you saw). Others have patched UseModWiki to put this in the user cookie ala MoinMoin. It isn't very difficult. See http://www.usemod.com/cgi-bin/wiki.pl?WikiPatches/ChangesSinceLastVisit. -- SunirShah


Post from Intlwiki-l, Sat, 7 Dec 2002 17:54:48 +0100 (CET)

I'm writing this to the international list because I doubt the English Wikipedians care much about this issue. Today I received an email from a guy in Russia who wants to start his own Esperanto encyclopedia project, because it costs too much to be constantly online just to work on an encyclopedia, and he wants to distribute the encyclopedia on CD. I also know that we will NEVER have a Czech Wikipedia without such a client, because a communications monopoly requires all people (except very large companies) to pay for the Internet by the minute, at quite high rates for the average Czech.

A Wikipedia client is desperately needed if we want more work to be done on the non-English Wikipedias. Users would be able to have the entire encyclopedia (in whichever languages they want) on their hard drives, and it could synchronize when they connect to the Internet. Also, such a program MUST be friendly to non-English users.

Having a user-friendly process (preferably offline) for people to translate the Wikipedia interface would also speed up the process of getting new languages into Wikipedia.

Could anyone please write such a program?


Thanks, Chuck


just some thoughts: an e-mail service for Wikipedia. You send an email to a specific address (edit@cz.wikipedia.org for example) with a subject line like:

SEARCH Tristan Tzara
You receive a mail with the search results.
Then you write:
GET Tristan Tzara (or whatever article title you want)
You do your modifications offline and then you send:
PUT Tristan Tzara, with your modified version in the body of the mail.

If nobody has changed the article in the meantime, the modifications are applied; otherwise you get an edit-conflict mail...

To protect this service from spammers it could be limited to registered mail-addresses.

This would of course never work for the English Wikipedia, but for smaller Wikipedias, especially in a situation like the Czech one, it would be a possible solution which does not require much programming (at least I hope so -- could this be done easily?). Complete this with compressed HTML versions of Wikipedias for download and offline reading. --elian
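(A minimal Python sketch of the subject-line dispatch elian describes. The wiki back-end calls are hypothetical stubs, and the mail transport -- e.g. an IMAP poll -- is left out:)

    # Dispatch one incoming mail by the verb in its subject line. The three
    # wiki_* functions are hypothetical stubs standing in for real server calls.
    def wiki_search(title):
        return "Search results for %r would go here." % title

    def wiki_get(title):
        return "Raw wikitext of %r would go here." % title

    def wiki_put(title, body):
        return "Saved %r (%d bytes), or an edit-conflict mail." % (title, len(body))

    def handle_mail(subject, body):
        verb, _, title = subject.partition(" ")
        if verb == "SEARCH":
            return wiki_search(title)
        if verb == "GET":
            return wiki_get(title)
        if verb == "PUT":
            return wiki_put(title, body)
        return "Unknown command: " + verb

    print(handle_mail("GET Tristan Tzara", ""))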

If you don't send the changes in the form of the new version but as a unidiff, then most conflicts would be avoided. But that is already implemented and is called CVS; the only thing is that the wiki server must support it. --Bodo Thiesen

Bodo,

I like your idea of the "email search service" for Wikipedia. I think it would also be nice to have a search app for SMS users, similar to Google's SMS service.

However, this could result in large returns being sent via SMS. To further expand on this idea, I believe that additional tags would need to be added to each Wikipedia page, in order to "flag" the most important info for return to the cellphone.

In turn, the Wikipedia SMS search would be very similar to Bodo's email search idea; however, less information would be retrieved.

Both the Wikipedia email search and the SMS search should currently be possible without any changes to the wiki server: there's already a search function on this site, and the client-side software would just need to truncate the wiki data. However, this would require additional communication between the client and the wiki server, and much of the processing would be moved to the client; also, a plug-in for Outlook would need to be created, and I'm not sure that's even possible.

A better idea would be to do the truncating and delivery preparation at the server.

Cameron


People talk about the implementation of "a client" here, but there can be multiple ones. Some want instantaneous collaboration; others would prefer to work offline, later merging their new stuff... I don't think ONE implementation should address all these. Rather, publish a "machine-friendly" interface spec atop HTTP, using special URLs or whatever, and let people start their own clients (and share them, if they want to). I've seen this "I like C++"/"I use Rebol"/"Did you check wxWindows?" debate multiple times before... come on -- to each his own -- we just need a simple machine-friendly interface and things will get started. In fact, it should be easier to do (on the side of Magnus Manske / the Wikipedia coders) than a human-friendly interface.

Repost: Oh sorry, I'm dumb, I missed Machine-friendly wiki interface before posting...

Dirk


Dirk's point about multiple clients is good. One client is the offline one, which may be most useful in countries with slow/expensive Internet access. Such a client needs to give the user access to all the entries, or at least, the ones the user is most interested in (a slightly out-of-date version of the others might be delivered by CD-ROM or DVD-ROM, perhaps on magazine covers).

If the user has access to all the entries and can navigate them... then they've got a duplicate Wikipedia system, so the quickest way to implement an offline reader is simply to put a local webserver on the user's machine, running the Wikipedia software.

To allow the client to do offline editing, use the same interface that Wikipedia uses (the user accesses the local Wikipedia through a web browser), but modify the server-side code so that it stores the edited articles separately and can upload them to the main wiki server, at the same time downloading new versions of all the articles the user is tracking.

-- Cabalamat 15:07, 27 Aug 2003 (UTC)
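(A runnable Python sketch of the synchronisation step Cabalamat describes, with two plain dicts standing in for the local and remote stores; real conflict detection is omitted:)

    # Offline edits are kept separately from the tracked local copies; sync
    # pushes them to the server, then refreshes the tracked articles.
    remote = {"Foo": "remote text of Foo", "Bar": "remote text of Bar"}
    local = {"Foo": "remote text of Foo"}       # articles the user tracks
    pending = {"Foo": "locally edited Foo"}     # edits made while offline

    def synchronise():
        for title, text in list(pending.items()):
            remote[title] = text    # upload; a real client would detect
            del pending[title]      # edit conflicts here
        for title in local:
            local[title] = remote[title]        # pull fresh tracked copies

    synchronise()
    print(local)    # {'Foo': 'locally edited Foo'}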


Whiteboard support would be great. Please look into other projects, like Jabber, that could be used to realize such things.

Offline editing would be very important for developing countries, at different levels: clients should be able to synchronize with servers AND servers should be able to synchronize with other servers, e.g. once a day. That way it would be possible to build up an effective infrastructure for information deployment. Protocols and code exist already; please use them.


If any of us is going to implement anything, please have a look at http://www.flyingmeat.com/vpwiki.html where this already works, and works well. -- Martin

Modularisation of MediaWiki into Services


One of the current implementation weaknesses is that MediaWiki is not designed as a distributed environment from the ground up. If I had to rewrite MediaWiki, it would probably be developed as (more or less) independent parts connected by web services. The basis for this would be to create at least an elementary class structure. In the big picture (a minimal interface sketch follows this list):

  • A set of web services doing all the database work, so it becomes easier to port to different types/versions of databases. Ideally the code behind them could use factory patterns to make database adaptation easy.
  • A web service for managing users (username, password, user roles, etc.). Because this is a task with high security requirements, it should only be used via SSL or equivalent encryption.
  • A web service for converting wiki syntax into a (not yet existing) WikiML. This will make it (optionally) possible to support different syntaxes, so users can choose the one they are most familiar with. An XML-based syntax also makes it possible to use XSLT to format the output.
  • A web service for querying changes in the database. This lets the user find out whether a cached version of an article is stale. The same technique could enable a kind of distributed MediaWiki, where a wiki client asks whether an article has changed; if not, the client's cached version is used, otherwise just a (set of) diffs is served instead of the whole article. This might save bandwidth and processing power.
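(A minimal Python sketch of the decomposition just listed; the interfaces and names are illustrative assumptions, not MediaWiki's actual code:)

    from abc import ABC, abstractmethod

    class StorageService(ABC):
        """All database access behind one interface, so back-ends can be swapped."""
        @abstractmethod
        def get_article(self, title: str) -> str: ...
        @abstractmethod
        def save_article(self, title: str, text: str) -> None: ...

    class UserService(ABC):
        """User management; should only be reachable over SSL or equivalent."""
        @abstractmethod
        def authenticate(self, name: str, password: str) -> bool: ...

    class SyntaxService(ABC):
        """Converts wiki syntax to an intermediate WikiML, for XSLT output."""
        @abstractmethod
        def to_wikiml(self, wikitext: str) -> str: ...

    class ChangeService(ABC):
        """Lets a client ask whether its cached copy of an article is stale."""
        @abstractmethod
        def changed_since(self, title: str, revision: int) -> bool: ...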

As for the overhead of web services, I don't see any problems, because MediaWiki-internal classes will still be able to call the appropriate functions directly; the web services would be implemented as wrappers.
The bigger problem I see is the chosen language, PHP. Because this scripting language was not originally built for large environments or web services, it has some drawbacks:

  • It will run slower than compiled code.
  • Implementing web services will be harder. Web services are easier in environments like ASP.NET, which was built with web services in mind. I therefore recommend allowing the use of Mono and Java, as long as their public functions have SOAP wrappers.

The advantages of a service-oriented architecture would therefore be:

  • Easy to extend with new services based on the existing ones. Examples of such services are the special pages.
  • New features might need new interfaces, but extending parts of the application won't affect existing services.
  • Easier distribution by mirroring just single services across different servers, instead of the whole thing. E.g. user handling might run on a single machine instead of being mirrored together with the rest of Wikipedia. Another example might be Commons, which might run on a different cluster specialised in static content.

MovGP0 00:31, 23 February 2006 (UTC) [reply ]

Initial efforts at a wkp-mode to view Wikipedia from Emacs


See my user page, User:MwZurp. It still only does viewing, but I would like to extend it to editing.

Reuse what OS has to offer


  1. XULRunner (new Mozilla product)
  2. FCKeditor as a base for editing tools
  3. ChatZilla extension for IRC collaboration

If the Mozilla crew is on time with its SVG implementation, the next level of data presentation is within reach. Choosing this approach would give us a strong rendering engine and a cross-platform solution.

Modifying Existing Browsers?


What if, instead of building custom client software from scratch, we modify (extend, by adding custom features) existing web browsers (like Firefox, through extensions), and make them full-fledged Wikipedia clients? 61.94.149.55 03:23, 26 May 2005 (UTC)

For example, see what w:flock (web browser) has done with integrating multiple blog services and clients into a browser. Any browser-integrated wiki client needs to be a bare-bones text-input area with buttons for all the typical functions. The buttons would not be defined at all at first, however -- you could select, say from a pulldown menu, the wiki that you were using (Wikipedia? PmWiki? PBwiki? etc.) and the buttons would switch to that wiki's specific outputs, to make sure that the font etc. operators all worked for any wiki system. You could even let the users program their own buttons, and all sorts of macros would be possible. It'd be nice... better than these little wiki edit pages by a lot, and still true to the spirit of integrating multiple functions into one area and making it accessible for the layman. —The preceding unsigned comment was added by 70.116.16.157 (talk · contribs) 2009-03-04T10:30:21 (UTC)

Access via WebServices


Long ago there was some discussion about implementing web services using NuSOAP; is anybody working on it? I would like to volunteer for that. Having the wiki accessible via alternative mechanisms (such as SOAP, XML-RPC, or even Atom or RSS) could be of great use, especially to handheld devices, which have limited display capabilities. A compatible wiki client could be installed on such devices, and the user could set preferences for how the information (articles, images, etc.) should be laid out. A well-documented web services layer could also be used by a desktop-based client that provides extended features such as editing and proofreading of articles. - AG

Take a look at http://meta.wikimedia.org/w/query.php. --Hartz 18:58, 5 September 2006 (UTC) [reply ]
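(A later note: query.php was eventually superseded by the MediaWiki Action API at api.php. A Python sketch of fetching an article's wikitext through it; the parameter names are from the current API, which postdates this discussion:)

    import json
    import urllib.request

    # Fetch an article's wikitext via the modern MediaWiki Action API, the
    # successor to the query.php mentioned above.
    url = ("https://meta.wikimedia.org/w/api.php"
           "?action=query&prop=revisions&rvprop=content&rvslots=main"
           "&titles=Wikipedia%20Client&format=json&formatversion=2")
    req = urllib.request.Request(url, headers={"User-Agent": "wiki-client-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    page = data["query"]["pages"][0]
    print(page["revisions"][0]["slots"]["main"]["content"][:200])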
