How Far Should I go?
wijnen at debian.org
Sun Oct 16 03:41:15 EDT 2011
On 16-10-11 06:31, cenobyte at dragoncrypt.com wrote:
> I am wondering what you all think about how far I, or any of us, for
> that matter, should go with hacking on the Ben.
I'd say if a hack is useful, you should do it. If you know that it will
be obsolete soon, do it only if it's not too hard.
> For one, Qi adapted
> OpenWRT rather than made a new distribution from (near) scratch. Was
> this due to time constraints, or something else?
I suppose it was because it's a waste of time to redo things which have
already been done by others. The most likely result of creating a new
distribution is that you spend a lot of time on it, but end up with
something that's worse than what's already there.
> Would it be worth it
> for someone to learn the MIPS architecture sufficiently well to make a
> Ben Tailored OS?
That all depends on the features of the new OS. If it's essentially
identical to GNU/Linux, it's a waste of time IMO. If you have features
in it which are very useful for the Ben, it probably is worth it.
I have started writing a new kernel plus OS, "Iris". It's on hold at the
moment, but I expect to resume working on it later this year. The Ben
is the first target for it, but I expect to port it to other platforms
later. So it is not really Ben-specific.
Writing a new kernel is a big job, so it must have great features that
make it worth it. It does, and IMO they're particularly suited for open
hardware. The main idea is that the user must be in control of what the
computer is doing. This is achieved by having a very limited set of
trusted programs (the kernel itself among them), which respond directly
to the user. All normal programs are started in a locked environment,
where they can only access resources that are given to them by trusted
programs (or, if they are given a communication channel, by each other).
They can't just read or write to any file. They can't just access the
network, the sound card, etc.
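To make the locked-environment idea concrete, here is a toy sketch in Python (not Iris code; all names are made up for illustration): a program has no global namespace to reach files or devices, and can only use the handles a trusted launcher explicitly grants it.

```python
class Capability:
    """A handle granting access to exactly one resource."""
    def __init__(self, name, data):
        self._name = name
        self._data = data

    def read(self):
        return self._data

def run_program(caps):
    """A 'locked' program: it can only use what it was handed."""
    # No open(), no sockets -- just the granted capabilities.
    return {name: cap.read() for name, cap in caps.items()}

# The trusted launcher decides which resources the program gets.
granted = {"config": Capability("config", "volume=80")}
print(run_program(granted))  # only 'config' is reachable
```

The point of the sketch is that denial is the default: anything not passed in at start-up simply does not exist from the program's point of view.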
When a program is started, it must be given access to the things it
needs. All those channels for accessing things look alike. This makes it
possible, for example, to write a fake driver which also writes
everything that passes through it to a log file. The communicating
programs cannot detect that this is happening, so they cannot change
their behaviour. This is great for debugging, because you know the
program will not behave differently while being debugged.
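The fake-driver trick above can be sketched as a logging proxy (again a hypothetical Python illustration, not the Iris interface): because the proxy exposes the same interface as the thing it wraps, the client cannot tell it is being observed.

```python
class Echo:
    """Stands in for a real driver: replies to requests."""
    def request(self, msg):
        return "reply:" + msg

class LoggingProxy:
    """Same interface as the driver it wraps; also records traffic."""
    def __init__(self, target, log):
        self._target = target
        self._log = log

    def request(self, msg):
        self._log.append(("->", msg))
        reply = self._target.request(msg)
        self._log.append(("<-", reply))
        return reply

log = []
driver = LoggingProxy(Echo(), log)
# The client sees the usual interface; behaviour is unchanged.
print(driver.request("ping"))  # reply:ping
print(log)                     # full traffic, for debugging
```

Swapping `Echo()` for the real driver (or another proxy) changes nothing for the client; that interchangeability is what makes transparent interposition possible.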
Why is this so great for the Ben, or free hardware in general? Because
of the way drivers are implemented. They are normal programs, which talk
to other programs just like any other. The only difference of "real"
drivers is that they need to be able to access their hardware. But this
doesn't change the way they are communicating. And drivers without
hardware, like file system drivers, don't actually need any special
rights at all. This means that any user can run any file system (it
doesn't need to be installed in the "system", the executable can be in
the user's home directory) and use it. This includes a hacked version of
the file system. There is no danger to the system when this is done,
because no special rights are used. If a user is allowed to read and
write an image file, they can manually read files from it anyway. On
GNU/Linux you need e2tools for such things, and then you need to use
special commands (e2cp instead of cp, etc). On Iris, there is no
difference between your own filesystem and a system-provided filesystem,
so everything is much easier. And it works when other programs are
trying to access it as well (where you can't change the "cp" command).
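The filesystem point can be illustrated the same way (a Python sketch with assumed names, not Iris's actual API): a user-supplied filesystem and a "system" one implement one interface, so a single generic tool works on both and nothing like e2cp is needed.

```python
class SystemFS:
    """Stands in for the system-provided filesystem."""
    def read(self, path):
        return "system data for " + path

class UserImageFS:
    """A filesystem backed by an image the user could read anyway."""
    def __init__(self, files):
        self._files = files

    def read(self, path):
        return self._files[path]

def cat(fs, path):
    # One generic 'cat'/'cp' works for any filesystem implementation.
    return fs.read(path)

print(cat(SystemFS(), "/etc/motd"))
print(cat(UserImageFS({"/notes.txt": "hello"}), "/notes.txt"))
```

Since `UserImageFS` needs no special rights, running a hacked or experimental version of it endangers nothing but the user's own image file.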
I particularly like the combination of this ease of writing drivers
with custom hardware, which might be connected to the UBB, for example.
On Iris, there's no need to get into kernel programming to make a custom
project work. I've done some Linux kernel programming, and I very much
feel that it's a good idea not to require that of people.
> Or, since the new Nanonote may or may not have a
> similar chip, is it better to stay "on the surface" as it were and not
> get too involved in low-level stuff?
Yes. MIPS is apparently a patent minefield, and you should not depend
on it. Iris uses assembly only for very few things (booting and
interrupt handling, mostly). When porting to a different architecture,
only those parts need to be rewritten. The rest should "just work" (but
it probably needs to be looked at anyway).
> Maybe the experience alone of deep
> MIPS knowledge will be worth it even if we move to another chip in the
Knowing MIPS is not very useful in itself, but knowing a CPU really well
is definitely a good thing. It doesn't really matter which, the point is
that you know how things work on that level. Every CPU has its own
special ways of doing things, but lots of things are the same for all.
 You, or anyone else, are very welcome to help with Iris! Please let
me know if you are interested.
 On Linux, you need kernel support for this. In many cases, this
support exists. But if you want to monitor your network, you need to run
wireshark as root. And you're always monitoring all network traffic, not
just from your own program.