> So I started learning python and working through parsing the json data
> for wiktionary. The goal: build a gopher interface to wiktionary - a
> sister site, of sorts, for gopherpedia. Then, it dawned on me this
> morning that my approach could possibly be wrong. How would I trigger
> a script to run from a gopher page? I don't know, so I come here.
You should find the answer to this question in the documentation
of the gopher server you use. Depending on the exact implementation of
your gopher server it may or may not be possible to execute scripts
from within gopher pages. For example, the way to do it with Motsognir
is explained on page 15 of its manual:
<http://sourceforge.net/p/motsognir/code/HEAD/tree/trunk/manual.pdf?format=raw>
On 2024-08-21, Mateusz Viste <mateusz@x.invalid> wrote:
> You should find the answer to this question in the documentation
> of the gopher server you use. Depending on the exact implementation of
> your gopher server it may or may not be possible to execute scripts
> from within gopher pages. For example, the way to do it with Motsognir
> is explained on page 15 of its manual:
> <http://sourceforge.net/p/motsognir/code/HEAD/tree/trunk/manual.pdf?format=raw>
As another example, gophernicus supports the CGI/1.1 standard
(within a dedicated folder on the server). Even gophermaps can be
processed as CGI scripts if made executable. Check: https://github.com/gophernicus/gophernicus#cgi-support
-f6k
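To make the CGI idea concrete, here is a minimal sketch of a gopher CGI script. It assumes the server hands the user's search string to the script via the standard CGI/1.1 QUERY_STRING variable (gophernicus may also expose it under its own variable names; check its README), and that the server accepts the common `i` info-line convention. Host and selector names below are made up.

```python
#!/usr/bin/env python3
# Minimal gopher CGI sketch (assumptions: the query arrives in the
# CGI/1.1 QUERY_STRING variable; the server accepts "i" info lines).
import os

def menu_line(itemtype, display, selector, host="example.org", port=70):
    """Build one gopher menu line: itemtype+display text, selector,
    host and port, tab-separated and terminated with CRLF."""
    return f"{itemtype}{display}\t{selector}\t{host}\t{port}\r\n"

def respond(query):
    """Build a tiny gopher menu answering a search query."""
    # "i" lines are display-only; "fake"/"(NULL)"/0 are the customary
    # placeholder selector, host and port for them.
    out = menu_line("i", f"You searched for: {query}", "fake", "(NULL)", 0)
    out += menu_line("0", f"Definition of {query}", f"/lookup/{query}")
    out += ".\r\n"  # a lone dot terminates a gopher menu
    return out

if __name__ == "__main__":
    query = os.environ.get("QUERY_STRING", "")
    print(respond(query), end="")
```

Dropped into the server's CGI directory and made executable, a script shaped like this is what turns a type 7 search item into a dynamic page.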
Impressed with the ingenuity of gopher creators out there, I decided to
give it a go. A source of inspiration is gopherpedia.com. A tool I use
daily, absolutely love it!
And after reading his phlog post about it, I saw that he created a proxy
and wrote code to scrape the data from Wikipedia and output it in a
format compatible with gopher. Fair enough.
I thought it was too much work. Why not just run the script locally and
use the Wikimedia API? Right?
So I started learning python and working through parsing the json data
for wiktionary. The goal: build a gopher interface to wiktionary - a
sister site, of sorts, for gopherpedia. Then, it dawned on me this
morning that my approach could possibly be wrong. How would I trigger
a script to run from a gopher page? I don't know, so I come here.
Do the good people concur with my opinion? Or is there a way I can do
it the way I envision? Right now, I'm using python and, at this
point, my script is pretty close. A query will search, filter, then
format the output to standard ASCII. But I'm afraid that the work
I've done is useless.
Here's the flow:
User goes to gopherict.com > word lookup > type word > press enter >
script executes query to wiktionary.org > downloads and filters >
creates gopher page of resulting definition > user displays
I hope you all have good news for me.
Daniel
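For reference, the flow described above might be sketched roughly like this. It is a hypothetical outline, not Daniel's actual script: it fetches an entry's raw wikitext through the standard MediaWiki action API (action=parse) on en.wiktionary.org and wraps text into gopher `i` info lines; the search-and-filter step Daniel describes is elided, and the placeholder selector/host values are conventions, not requirements of his setup.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the lookup flow: fetch a Wiktionary entry via
# the MediaWiki action API, then wrap (already filtered) text into
# gopher "i" info lines. The filtering step itself is left out.
import json
import textwrap
import urllib.parse
import urllib.request

API = "https://en.wiktionary.org/w/api.php"

def fetch_wikitext(word):
    """Download the raw wikitext of a Wiktionary entry (action=parse)."""
    qs = urllib.parse.urlencode(
        {"action": "parse", "page": word, "prop": "wikitext",
         "format": "json"})
    with urllib.request.urlopen(f"{API}?{qs}") as resp:
        data = json.load(resp)
    return data["parse"]["wikitext"]["*"]

def to_gopher(text, width=70):
    """Render plain text as gopher menu info lines (type 'i'),
    wrapped to a gopher-friendly width and dot-terminated."""
    lines = []
    for para in text.splitlines():
        for chunk in textwrap.wrap(para, width) or [""]:
            lines.append(f"i{chunk}\tfake\t(NULL)\t0\r\n")
    return "".join(lines) + ".\r\n"

if __name__ == "__main__":
    import sys
    word = sys.argv[1] if len(sys.argv) > 1 else "gopher"
    print(to_gopher(fetch_wikitext(word)), end="")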
Not sure this is part of the gopher protocol, but...
My server is gophernicus and it can execute CGI scripts. I have pages
where you enter something, then a script executes and returns something
relevant as a gopher page.
I'm working on a sister gopher site to gopherpedia, but as an interface
to wiktionary.org instead. Instead of creating a proxy and scraping
the page, I'm using the Wikimedia API. Getting close to a functional
and stable script.
> Not sure this is part of the gopher protocol, but...
> My server is gophernicus and it can execute CGI scripts. I have pages
> where you enter something, then a script executes and returns something
> relevant as a gopher page.
This would be a server implementation detail, not a protocol detail.
My frustration was that there was a selector type for "search returning
directory", but not one for "search returning document". I tried to
convince McCahill and crew to add such a thing, but they had developed
the Gopher+ concept, and weren't interested.
De
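De's point is about the item types in RFC 1436: type 7 ("Index-Search server") is the only search type, and its reply must itself be a gopher menu, so a search that logically yields one document still has to wrap it in a menu containing a single type 0 (text file) item. A small illustration of the two line forms (hostname and selectors here are hypothetical):

```python
# Type 7 menu item: presents a search prompt to the user. The reply to
# the resulting query must be a gopher menu, per RFC 1436.
search_item = "7Word lookup\t/cgi-bin/lookup\tgopherict.com\t70\r\n"

# There is no "search returning document" type, so the reply is a menu
# wrapping the one result as a type 0 (text file) item, dot-terminated.
result_menu = (
    "0Definition of 'gopher'\t/cgi-bin/lookup?gopher\tgopherict.com\t70\r\n"
    ".\r\n"
)
```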
> I'm working on a sister gopher to gopherpedia but instead an interface
> to wiktionary.org. Instead of creating a proxy and scraping
> the page, I'm using the wikimedia api. Getting close to a functional
> and stable script.
In the early 90s, I built a gopher gateway to usenet. It was its own
standalone gopher server, and spoke NNTP to the campus news server. The
tradeoffs are that you have to implement at least some of the gopher
protocol, but in return you get to maintain state about the back end
connections, cache things, etc.
Interesting. What happened to it? I'm sure your campus no longer
operates usenet?
> Interesting. What happened to it? I'm sure your campus no longer
> operates usenet?
We shut it down before 1998, along with the public gopher service
we ran. Pretty sure I had the code on a tape, but I think I've misplaced
the tape. :(
De