The SOL support in IPMI v2.0 is believed to be based on the Intel implementation, so it may be possible to reverse engineer it; in the meantime you must use Intel's "dpccli" / "dpcproxy" programs to use the SOL functionality. Unfortunately, at the time of writing, these programs have their problems, but they are all that can be used at the moment, and they work well enough to be useful.

Although dpcproxy must be spoken to using telnet (if you need any security at all on the LAN, then I recommend making dpcproxy bind to the loopback interface only), the SOL session itself (between dpcproxy and the BMC) is encrypted by default (although Intel gives no details of the encryption), so passwords typed over the SOL session are (probably) not trivially interceptable.

On the machine(s) from which you will manage the other servers:

If you like, you can test the above configuration using a real serial null-modem cable and a terminal program such as "minicom" or "gkermit". In either case, you should then do the following:

On the management machine (you could also do the first step on the target machine, using the Open IPMI interface).

Note that the ipmitool "sol" command is likely to be renamed when IPMI v2.0 SOL support is added to the program.

This document describes how to set up Debian / Sarge to take advantage of the management features of the Intel SR2300. This chassis uses the Intel Server Board SE7501WV2, but nearly all of this is also relevant to other related Intel server motherboards (such as the SE7501BR2 and the SE7501HG2), and a lot of it will be relevant to other boards which implement IPMI v1.5 or later.
Note that IPMI seems to have more than its fair share of TLAs.
It is useful to know a bit about how IPMI works, so I'll give an overview and try to demystify some of the weird IPMI/Intel jargon.
The IPMI standard allows for other interfaces as well.
The packages and tools that I used to gain access to the IPMI functionality are:

Note that, as far as I know, the IPMI device is most likely to end up at device major number 254, but that it will take device numbers from the 240-254 block, which according to linux/Documentation/ is "Allocated for local/experimental use".
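Because the major number is assigned dynamically from that block, the safest way to create the device node by hand is to read the assigned number out of /proc/devices. A sketch, using a captured sample of /proc/devices so the extraction step can be seen in isolation ("ipmidev" is the name the OpenIPMI character-device driver registers there; the node name /dev/ipmi0 is the usual convention):

```shell
# Sample of the relevant /proc/devices lines (roughly what you would
# see with the OpenIPMI modules loaded); on a real machine read
# /proc/devices itself instead of this variable.
sample='254 ipmidev
10 misc
4 ttyS'

# Pull out the major number assigned to the IPMI character device:
major=$(printf '%s\n' "$sample" | awk '$2 == "ipmidev" { print $1 }')
echo "$major"    # prints 254 for the sample above

# ...and then, as root, create the node:
#   mknod -m 0600 /dev/ipmi0 c "$major" 0
```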
As previously mentioned, the Linux kernel currently has a problem with RTS/CTS on serial consoles (serial log-ins are unaffected, since console output seems to be independent of the settings that the getty sets on the same port).
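A serial-console setup that sidesteps the RTS/CTS issue is to configure both the kernel console and the getty without hardware flow control. A sketch (the port ttyS0, the speed 115200, and the vt100 terminal type are assumptions - adjust them to your wiring):

```
# Boot loader (grub's menu.lst shown; LILO users put the console=
# options in an append= line).  Note that the trailing "r" which would
# enable RTS/CTS on the console is deliberately NOT present:
#
#   kernel /vmlinuz root=/dev/sda1 console=tty0 console=ttyS0,115200n8
#
# /etc/inittab: a getty on the same port for serial log-ins:
#
#   T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100
```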
At the time of writing, the BMC code on the SE7501WV2 implements IPMI v1.5 - and IPMI v1.5 does not define SOL support, so the SOL implementation on these boards is proprietary, and Intel is not currently releasing details except under an NDA, for some strange reason (boo, hiss).
Hence this document - the purpose of which is to allow Semantico staff to recreate the IPMI-based installation which I carried out during July, and which will hopefully be helpful to others as well.
The original motivation for setting up IPMI, for me, was to make use of Serial Over LAN - this allows you to deploy these servers in a remote location, making only power and Ethernet connections to each server, and yet still get nearly all of the benefits of expensive KVM or other remote-control systems (such as those built around serial concentrators), with:

IPMI stands for Intelligent Platform Management Interface. It is an open standard for machine health monitoring and control (including remote control), and is implemented by many hardware vendors - Intel is one of the originators and early adopters of the standard.
Here are some useful things that IPMI can do on the SR2300 under Linux:

If you would like to know more, then this document from the 2003 Linux Symposium provides more detail.
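Once the OpenIPMI driver is loaded (or the BMC's LAN channel is configured), most of this functionality can be driven from ipmitool. A sketch of typical invocations - the hostname and user name are placeholders, and all of these obviously need a real BMC to talk to, so treat this as a cheat-sheet rather than something to paste blindly:

```shell
# Locally, through the OpenIPMI kernel interface (/dev/ipmi0):
ipmitool chassis status      # power state, last restart cause
ipmitool sensor              # fans, temperatures, voltages, thresholds
ipmitool sel list            # dump the System Event Log

# Remotely, over the IPMI v1.5 LAN channel:
ipmitool -I lan -H server1 -U admin chassis power cycle
```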