Discussion:
IBM 10/100 Mbps Ethernet Adapter (9-K) drivers for PS/2 available now!
Christian Holzapfel
2023-11-21 16:38:06 UTC
Permalink
Hear ye, Hear ye!
There's a new dream team on the block.
That may be you, your favorite PS/2 and your IBM 10/100 Mbps Ethernet Adapter (9-K)!
After weeks and months of reverse and forward engineering, we now have drivers for this late-production, 32-bit, 100 Mbps-capable Micro Channel Ethernet adapter, type 9-K, code name "San Remo", which IBM released only for the RS/6000 series as one of the very last MCA cards.
For 30 years there were no drivers outside of AIX, IBM's proprietary Unix operating system.
Today you may start running it in Windows 95, probably also 98, and Linux 2.2!
Get your copy here:
http://www.holzapfel.biz/8F62/sanremo-win9x.zip
https://github.com/holzachr/sanremo-linux
Please note that those are considered beta drivers, and may not be perfect yet.
They were tested on Windows 95 B OSR2 and Debian 2.2 on a souped-up PC 750. I would love to hear your personal feedback on a true Micro Channel system.
This was a joint effort of Ryan Alswede and me.
Enjoy! 😀
Christian Holzapfel
2023-11-21 16:40:21 UTC
Permalink
And as our friend Shane pointed out:

FYI, there's a seller on eBay that has >200 of these in "new, bulk" condition for ~$60 shipped, and shipping costs drop if you buy multiple cards.

https://www.ebay.com/itm/193693679438
IBMMuseum
2023-11-21 18:20:28 UTC
Permalink
...and shipping costs drop if you buy multiple cards.
I'm asking how much shipping "and handling" costs can be reduced for multiple adapters - we Yanks could probably have four put in one USPS box and shipped for far less than $140.
Pertti Helander
2023-12-04 11:56:09 UTC
Permalink
Post by Christian Holzapfel
FYI, there's a seller on eBay that has >200 of these in "new, bulk" condition for ~$60 shipped, and shipping costs drop if you buy multiple cards.
https://www.ebay.com/itm/193693679438
Best 10/100 NIC for MCA PS/2 machines! And Christian has made drivers for it, thanks!
I would love to get one for my 9577, but shipping costs from the USA are too high for me.
Does anyone in the EU have these NICs to sell? Shipping costs inside the EU are more reasonable,
and there are no customs charges collecting VAT on items sent from outside the EU.
Wolfgang Gehl
2023-11-21 21:00:29 UTC
Permalink
Post by Christian Holzapfel
Hear ye, Hear ye!
There's a new dream team on the block.
That may be you, your favorite PS/2 and your IBM 10/100 Mbps Ethernet Adapter (9-K)!
After weeks and months of reverse and forward engineering, we now have drivers for this very late manufactured, 32-bit Micro Channel 100 Mbps capable Ethernet Adapter type 9-K, code name "San Remo", that IBM released only for the RS/6000 series as one of the very last MCA cards.
For 30 years there were no drivers outside of AIX, IBM's proprietary Unix operating system.
Today you may start running it in Windows 95, probably also 98, and Linux 2.2!
http://www.holzapfel.biz/8F62/sanremo-win9x.zip
https://github.com/holzachr/sanremo-linux
Please note that those are considered beta drivers, and may not be perfect yet.
They were tested on Windows 95 B OSR2 and Debian 2.2 on a souped-up PC 750. I would love to hear your personal feedback on a true Micro Channel system.
This was a joint effort of Ryan Alswede and me.
Enjoy! 😀
Cool stuff, will definitely try it out on a 9595 or server 520 and
report back. If only I had a little more time to play around with MCA ...

The problem with using a 9-K in an Intel system is the processor load.
On a 7030-3BT I get about 4.5 MB/s throughput with a comparatively
sluggish AIXwindows. A 9595 will not achieve this. As soon as a GUI
comes into play, the 90 MHz Pentium will be brought to its knees.

In any case, thank you for your work, which I greatly appreciate!

Wolfgang
IBMMuseum
2023-11-21 22:03:47 UTC
Permalink
I was asking how much shipping "and handling" costs can be reduced for multiple adapters...
New link with "Make an offer": https://www.ebay.com/itm/196091827841

The seller seems to be willing to sell multiple adapters (from two up) for $20 each, and is negotiable on shipping via UPS (U.S.-based buyer) for far less cost than on the auction listing...
Ryan Alswede
2023-11-22 01:33:58 UTC
Permalink
Windows NT driver is in the finishing stages, will be available soon.
Ryan Alswede
2023-11-22 01:33:05 UTC
Permalink
Post by Wolfgang Gehl
A 9595 will not achieve this. As soon as a GUI
comes into play, the 90MHz Pentium will go to its knees.
Why? It's a DMA enabled card.

Ryan
Kevin Bowling
2023-11-22 02:13:01 UTC
Permalink
Post by Wolfgang Gehl
A 9595 will not achieve this. As soon as a GUI
comes into play, the 90MHz Pentium will go to its knees.
Why? It's a DMA enabled card.
Ryan
Interrupt rate - not sure if the card has batching or moderation
available (this would be on the PCI side) - that would help a lot.
Wolfgang Gehl
2023-12-02 17:06:23 UTC
Permalink
Post by Wolfgang Gehl
Post by Christian Holzapfel
Hear ye, Hear ye!
There's a new dream team on the block.
That may be you, your favorite PS/2 and your IBM 10/100 Mbps Ethernet Adapter (9-K)!
After weeks and months of reverse and forward engineering, we now have
drivers for this very late manufactured, 32-bit Micro Channel 100 Mbps
capable Ethernet Adapter type 9-K, code name "San Remo", that IBM
released only for the RS/6000 series as one of the very last MCA cards.
For 30 years there were no drivers outside of AIX, IBM's proprietary
Unix operating system.
Today you may start running it in Windows 95, probably also 98, and Linux 2.2!
http://www.holzapfel.biz/8F62/sanremo-win9x.zip
https://github.com/holzachr/sanremo-linux
Please note that those are considered beta drivers, and may not be perfect yet.
They were tested on Windows 95 B OSR2 and Debian 2.2 on a souped-up PC
750. I would love to hear your personal feedback on a true Micro
Channel system.
This was a joint effort of Ryan Alswede and me.
Enjoy! 😀
Cool stuff, will definitely try it out on a 9595 or server 520 and
report back. If only I had a little more time to play around with MCA ...
The problem with using a 9-K in an Intel system is the processor load.
On a 7030-3BT I have about 4.5MB throughput with a comparatively
sluggish AIXwindows. A 9595 will not achieve this. As soon as a GUI
comes into play, the 90MHz Pentium will go to its knees.
In any case, thank you for your work, which I greatly appreciate!
Wolfgang
Sorry for coming late to the show. Here are the results for a 9595. The
test runs were performed five times and averaged.

Model 9595 (Server 95)
Pentium 90 MHz
256 MB RAM
Windows 95C
sanremo driver first release
Netio 1.32

Packet size  1k bytes: 2003.55 KByte/s Tx, 2474.76 KByte/s Rx.
Packet size  2k bytes: 2842.76 KByte/s Tx, 2988.59 KByte/s Rx.
Packet size  4k bytes: 3416.27 KByte/s Tx, 3265.17 KByte/s Rx.
Packet size  8k bytes: 4255.07 KByte/s Tx, 3328.77 KByte/s Rx.
Packet size 16k bytes: 4730.52 KByte/s Tx, 3286.24 KByte/s Rx.
Packet size 32k bytes: 4570.87 KByte/s Tx, 3137.11 KByte/s Rx.

With your driver, the 9-K is three to four times faster than the
Etherstreamer MC32, which was up to now the fastest Ethernet adapter for
the IBM PS/2. The system remains responsive even if Sysinternals'
Process Explorer claims 100% CPU utilization.

Well done!

Will try your Linux driver with Slackware 11 and the second release of
your W95 driver as soon as possible.

Wolfgang
Louis Ohland
2023-12-02 19:05:34 UTC
Permalink
Well, there are different amounts of RAM.

What effect does adding RAM have on performance? Where is the sweet spot?
Post by Wolfgang Gehl
256 MB RAM
Louis Ohland
2023-12-02 19:23:17 UTC
Permalink
Seems to be an imbalance in the Rogaine flow.

Especially 8K - 32K, 1K-1.5K difference.

Why?
Post by Wolfgang Gehl
Model 9595 (Server 95)
Pentium 90 MHz
256 MB RAM
Windows 95C
sanremo driver first release
Netio 1.32
Packet size  1k bytes: 2003.55 KByte/s Tx, 2474.76 KByte/s Rx.
Packet size  2k bytes: 2842.76 KByte/s Tx, 2988.59 KByte/s Rx.
Packet size  4k bytes: 3416.27 KByte/s Tx, 3265.17 KByte/s Rx.
Packet size  8k bytes: 4255.07 KByte/s Tx, 3328.77 KByte/s Rx.
Packet size 16k bytes: 4730.52 KByte/s Tx, 3286.24 KByte/s Rx.
Packet size 32k bytes: 4570.87 KByte/s Tx, 3137.11 KByte/s Rx.
Louis Ohland
2023-12-02 19:25:14 UTC
Permalink
Seems to be an imbalance in the Rogaine flow.

Especially 8K - 32K, 1MB - 1.5MB difference.

Why?
Post by Wolfgang Gehl
Model 9595 (Server 95)
Pentium 90 MHz
256 MB RAM
Windows 95C
sanremo driver first release
Netio 1.32
Packet size  1k bytes: 2003.55 KByte/s Tx, 2474.76 KByte/s Rx.
Packet size  2k bytes: 2842.76 KByte/s Tx, 2988.59 KByte/s Rx.
Packet size  4k bytes: 3416.27 KByte/s Tx, 3265.17 KByte/s Rx.
Packet size  8k bytes: 4255.07 KByte/s Tx, 3328.77 KByte/s Rx.
Packet size 16k bytes: 4730.52 KByte/s Tx, 3286.24 KByte/s Rx.
Packet size 32k bytes: 4570.87 KByte/s Tx, 3137.11 KByte/s Rx.
With your driver, the 9-K is three to four times faster than the
Etherstreamer MC32, which was up to now the fastest Ethernet adapter for
the IBM PS/2. The system remains responsive even if Sysinternals'
Process Explorer claims 100% CPU utilization.
Well done!
Will try your Linux driver with Slackware 11 and the second release of
your W95 driver as soon as possible.
Wolfgang
Christian Holzapfel
2023-12-03 12:04:17 UTC
Permalink
Post by Louis Ohland
Seems to be an imbalance in the Rogaine flow.
Especially 8K - 32K, 1MB - 1.5MB difference.
Why?
That must be Kevin's interrupt rate.
For the same total data rate, many small packets are more difficult than a few larger ones.

What's also interesting is Lionel's 486-era Pentium Overdrive 83 versus Wolfgang's "real" Pentium 90 - what a huge gap.
Pretty much like comparing the RS6k workstation and server.
So MHz is not everything (but it surely helps...).

Thank you guys for testing. I'm glad it works out for you.
Wolfgang Gehl
2024-01-15 23:26:06 UTC
Permalink
Post by Wolfgang Gehl
Cool stuff, will definitely try it out on a 9595 or server 520 and
report back. If only I had a little more time to play around with MCA ...
I finally found time to install a larger hard drive (16GB) in my 9595
and install Slackware 11 (kernel 2.4.33.3).

Unfortunately I have a problem with the sanremo.patch. This is what
happened:

/usr/src/linux# patch -p0 < sanremo.patch
patching file Documentation/Configure.help
Hunk #1 succeeded at 12863 with fuzz 2 (offset 6340 lines).
patching file drivers/net/Config.in
Hunk #1 FAILED at 121.
1 out of 1 hunk FAILED -- saving rejects to file drivers/net/Config.in.rej
patching file drivers/net/Makefile
Hunk #1 FAILED at 410.
1 out of 1 hunk FAILED -- saving rejects to file drivers/net/Makefile.rej
patching file drivers/net/Space.c
Hunk #1 FAILED at 50.
Hunk #2 succeeded at 205 (offset -105 lines).
1 out of 2 hunks FAILED -- saving rejects to file drivers/net/Space.c.rej



file Config.in.rej

***************
*** 121,126 ****
if [ "$CONFIG_MCA" = "y" ]; then
tristate 'NE/2 (ne2000 MCA version) support' CONFIG_NE2_MCA
tristate 'SKnet MCA support' CONFIG_SKMC
fi
bool 'EISA, VLB, PCI and on board controllers' CONFIG_NET_EISA
if [ "$CONFIG_NET_EISA" = "y" ]; then
--- 121,127 ----
if [ "$CONFIG_MCA" = "y" ]; then
tristate 'NE/2 (ne2000 MCA version) support' CONFIG_NE2_MCA
tristate 'SKnet MCA support' CONFIG_SKMC
+ tristate 'IBM MCA 10/100 Mbps Ethernet (9-K)' CONFIG_SANREMO
fi
bool 'EISA, VLB, PCI and on board controllers' CONFIG_NET_EISA
if [ "$CONFIG_NET_EISA" = "y" ]; then



file Makefile.rej

***************
*** 410,415 ****
endif
endif

ifeq ($(CONFIG_DEFXX),y)
L_OBJS += defxx.o
endif
--- 410,423 ----
endif
endif

+ ifeq ($(CONFIG_SANREMO),y)
+ L_OBJS += sanremo.o
+ else
+ ifeq ($(CONFIG_SANREMO),m)
+ M_OBJS += sanremo.o
+ endif
+ endif
+
ifeq ($(CONFIG_DEFXX),y)
L_OBJS += defxx.o
endif



file Space.c.rej

***************
*** 58,63 ****
extern int el3_probe(struct device *);
extern int at1500_probe(struct device *);
extern int pcnet32_probe(struct device *);
extern int at1700_probe(struct device *);
extern int fmv18x_probe(struct device *);
extern int eth16i_probe(struct device *);
--- 58,64 ----
extern int el3_probe(struct device *);
extern int at1500_probe(struct device *);
extern int pcnet32_probe(struct device *);
+ extern int sanremo_probe(struct device *);
extern int at1700_probe(struct device *);
extern int fmv18x_probe(struct device *);
extern int eth16i_probe(struct device *);


Looks like I need help. Is there a solution to this or do I have to go
back to Slackware 8 (Kernel 2.2.19)?

Wolfgang
Christian Holzapfel
2024-01-16 08:20:06 UTC
Permalink
Post by Wolfgang Gehl
Looks like I need help. Is there a solution to this or do I have to go
back to Slackware 8 (Kernel 2.2.19)?
Wolfgang
The patch and C file won't work with a 2.4 kernel out of the box.
I already started porting sanremo.c to 2.4, but it's not final yet - I have no patch, and I corrupted my 2.4 Linux partition >.<
I can send it to you for further testing. It's not fully cleaned up, but it should compile and give you a connection.

Interestingly, the 2.4 kernel seems to tackle some performance issues: it now appears to hand the network subsystem's buffers straight down to the card for DMA.
That seems to help in one direction and degrade the other.

This is what I measured on an 8595 with Pentium 200, Kernel 2.2:

NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel

TCP/IP connection established.
1k packets: 6726 k/sec
2k packets: 8456 k/sec
4k packets: 8741 k/sec
8k packets: 8680 k/sec
16k packets: 8586 k/sec
32k packets: 7974 k/sec

NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel

TCP/IP connection established.
1k packets: 6196 k/sec
2k packets: 6175 k/sec
4k packets: 6237 k/sec
8k packets: 6250 k/sec
16k packets: 6240 k/sec
32k packets: 6172 k/sec

And on the same system with a Kernel 2.4:

NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel

TCP/IP connection established.
1k packets: 7415 k/sec
2k packets: 7485 k/sec
4k packets: 7809 k/sec
8k packets: 7793 k/sec
16k packets: 7689 k/sec
32k packets: 7199 k/sec

NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel

TCP/IP connection established.
1k packets: 6949 k/sec
2k packets: 6900 k/sec
4k packets: 6903 k/sec
8k packets: 6900 k/sec
16k packets: 6910 k/sec
32k packets: 6886 k/sec
Wolfgang Gehl
2024-02-25 17:20:24 UTC
Permalink
Hmmm, very silent here. I hope I'm not the last man standing.

After a lot of fiddling with the kernel source code I have found a way
to establish a reliable and persistent network connection under
Slackware 11, Kernel 2.4.33.3. The solution to the puzzle here was to
build the network driver as a module, to load the module via
/etc/rc.d/rc.netdevice, and assign a static IP configuration via
/etc/rc.d/rc.inet1.conf, /etc/resolv.conf and /etc/hosts, ymmv.
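For the record, a sketch of that setup (the module name follows our driver; the IP values are just examples, adjust for your network):

```shell
# In the patched 2.4 kernel tree: build the driver as a module (CONFIG_SANREMO=m)
make modules && make modules_install

# /etc/rc.d/rc.netdevice -- Slackware runs this at boot to load NIC modules:
/sbin/modprobe sanremo

# /etc/rc.d/rc.inet1.conf -- static IPv4 configuration (example values):
IPADDR[0]="192.168.1.50"
NETMASK[0]="255.255.255.0"
GATEWAY="192.168.1.1"
```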

The machine involved was a 9595A, P90, 256 MB RAM, now with a 10/100
network link. A big thank you to all who were involved in the driver
development, and especially to Christian, who sent driver patches and was patient with me.

Here are the netio 1.30 client results.

              KB/s Tx    KB/s Rx
1KB packet    6567.01    5229.21
2KB packet    7499.88    5307.07
4KB packet    7623.93    5288.49
8KB packet    7545.33    5315.38
16KB packet   7151.32    5326.32
32KB packet   7668.42    5256.43

Wolfgang
Post by Christian Holzapfel
Post by Wolfgang Gehl
Looks like I need help. Is there a solution to this or do I have to go
back to Slackware 8 (Kernel 2.2.19)?
Wolfgang
The patch and C-file won't work with a 2.4 Kernel out of the box.
I already started porting the sanremo.c to 2.4, but it's not final yet, I have no patch and corrupted my 2.4 Linux partition >.<
I can send it to you for further testing. It's not fully cleaned up, but should compile and give a connection.
Interestingly, the 2.4 Kernel seems to tackle some performance issues: It now seems to hand the network subsystem buffers straight down to the card for DMA.
Seems to only profit in one direction, and degrade in the other.
NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel
TCP/IP connection established.
1k packets: 6726 k/sec
2k packets: 8456 k/sec
4k packets: 8741 k/sec
8k packets: 8680 k/sec
16k packets: 8586 k/sec
32k packets: 7974 k/sec
NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel
TCP/IP connection established.
1k packets: 6196 k/sec
2k packets: 6175 k/sec
4k packets: 6237 k/sec
8k packets: 6250 k/sec
16k packets: 6240 k/sec
32k packets: 6172 k/sec
NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel
TCP/IP connection established.
1k packets: 7415 k/sec
2k packets: 7485 k/sec
4k packets: 7809 k/sec
8k packets: 7793 k/sec
16k packets: 7689 k/sec
32k packets: 7199 k/sec
NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel
TCP/IP connection established.
1k packets: 6949 k/sec
2k packets: 6900 k/sec
4k packets: 6903 k/sec
8k packets: 6900 k/sec
16k packets: 6910 k/sec
32k packets: 6886 k/sec
holzachr
2024-02-26 10:53:26 UTC
Permalink
Still reading and working on it, slowly but thoroughly ;-)

Thank you for your time and testing - glad it finally works.

I added the kernel 2.4 files to my GitHub repo:
https://github.com/holzachr/sanremo-linux/tree/Kernel-2.4.18

Tomas received a separate copy too.

lharr...@gmail.com
2023-11-26 22:50:42 UTC
Permalink
Post by Christian Holzapfel
Hear ye, Hear ye!
There's a new dream team on the block.
That may be you, your favorite PS/2 and your IBM 10/100 Mbps Ethernet Adapter (9-K)!
After weeks and months of reverse and forward engineering, we now have drivers for this very late manufactured, 32-bit Micro Channel 100 Mbps capable Ethernet Adapter type 9-K, code name "San Remo", that IBM released only for the RS/6000 series as one of the very last MCA cards.
For 30 years there were no drivers outside of AIX, IBM's proprietary Unix operating system.
Today you may start running it in Windows 95, probably also 98, and Linux 2.2!
http://www.holzapfel.biz/8F62/sanremo-win9x.zip
https://github.com/holzachr/sanremo-linux
Please note that those are considered beta drivers, and may not be perfect yet.
They were tested on Windows 95 B OSR2 and Debian 2.2 on a souped-up PC 750. I would love to hear your personal feedback on a true Micro Channel system.
This was a joint effort of Ryan Alswede and me.
Enjoy! 😀
I bought one and will give it a try on a Reply TurboProcessor. Nice work!

-Lionel
schimmi
2023-11-28 00:32:52 UTC
Permalink
Post by Christian Holzapfel
Hear ye, Hear ye!
There's a new dream team on the block.
That may be you, your favorite PS/2 and your IBM 10/100 Mbps Ethernet Adapter (9-K)!
After weeks and months of reverse and forward engineering, we now have drivers for this very late manufactured, 32-bit Micro Channel 100 Mbps capable Ethernet Adapter type 9-K, code name "San Remo", that IBM released only for the RS/6000 series as one of the very last MCA cards.
For 30 years there were no drivers outside of AIX, IBM's proprietary Unix operating system.
Today you may start running it in Windows 95, probably also 98, and Linux 2.2!
http://www.holzapfel.biz/8F62/sanremo-win9x.zip
https://github.com/holzachr/sanremo-linux
Please note that those are considered beta drivers, and may not be perfect yet.
They were tested on Windows 95 B OSR2 and Debian 2.2 on a souped-up PC 750. I would love to hear your personal feedback on a true Micro Channel system.
This was a joint effort of Ryan Alswede and me.
Enjoy! 😀
I bought one and will give it a try on a Reply TurboProcessor. Nice work!
-Lionel
Nice, thank you, Christian!
Christian Holzapfel
2023-11-29 19:33:48 UTC
Permalink
I hope it works for all of you without any headaches.
If you would like to benchmark it (honestly, I would like you to benchmark it!), I've created a zip file containing matching versions of the netio benchmark for Win32, Linux, OS/2 and AIX that should enable most of us to test the adapters in our favorite environments:

http://www.holzapfel.biz/8F62/netio132-ibm.zip

Just start one executable on a connected powerful computer with the "-s" parameter to start a server, then run the executable on the Micro Channel machine using the "-t <server-ip>" command line argument.
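For example (the server address is just a placeholder for your own; the flags are as described above):

```shell
# On the fast, already-networked machine: start the netio server
netio -s

# On the Micro Channel machine under test: run the client against it
netio -t 192.168.1.10
```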
lharr...@gmail.com
2023-11-29 23:13:12 UTC
Permalink
Post by Christian Holzapfel
I hope it works for all of you without any headaches.
http://www.holzapfel.biz/8F62/netio132-ibm.zip
Just start one executable on a connected powerful computer with the "-s" parameter to start a server, then run the executable on the Micro Channel machine using the "-t <server-ip>" command line argument.
What are the chances of a DOS driver?
lharr...@gmail.com
2023-11-30 01:23:13 UTC
Permalink
Post by ***@gmail.com
Post by Christian Holzapfel
I hope it works for all of you without any headaches.
http://www.holzapfel.biz/8F62/netio132-ibm.zip
Just start one executable on a connected powerful computer with the "-s" parameter to start a server, then run the executable on the Micro Channel machine using the "-t <server-ip>" command line argument.
What are the chances of a DOS driver?
Received the card today, pretty good! Results using provided netio132 below:

System:
--------------------------------------------------------------------------------------------------
Reply TurboProcessor 60/65/80
Intel 83Mhz Pentium OverDrive
64MB RAM + 128KB L2 Cache (IDT7MB6098/A/SA33K)
BusLogic BT-646 / SDC3211F
ZuluSCSI RP2040
Nakamichi MJ-5.16
Gotek Floppy Drive 435 MCU w/ Rotary Encoder OLED
ChipChat 16



3C529-TP Etherlink III
--------------------------------------------------------------------------------------------------
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 928.40 KByte/s
Sending to client, packet size 1k ... 804.07 KByte/s
Receiving from client, packet size 2k ... 942.55 KByte/s
Sending to client, packet size 2k ... 832.55 KByte/s
Receiving from client, packet size 4k ... 873.69 KByte/s
Sending to client, packet size 4k ... 832.20 KByte/s
Receiving from client, packet size 8k ... 814.47 KByte/s
Sending to client, packet size 8k ... 844.32 KByte/s
Receiving from client, packet size 16k ... 832.46 KByte/s
Sending to client, packet size 16k ... 840.34 KByte/s
Receiving from client, packet size 32k ... 830.54 KByte/s
Sending to client, packet size 32k ... 787.97 KByte/s
Done.

10/100 Mbps Ethernet (9-K)
--------------------------------------------------------------------------------------------------
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 1149.31 KByte/s
Sending to client, packet size 1k ... 1410.54 KByte/s
Receiving from client, packet size 2k ... 1381.70 KByte/s
Sending to client, packet size 2k ... 1651.40 KByte/s
Receiving from client, packet size 4k ... 1904.52 KByte/s
Sending to client, packet size 4k ... 1810.20 KByte/s
Receiving from client, packet size 8k ... 1971.34 KByte/s
Sending to client, packet size 8k ... 1892.09 KByte/s
Receiving from client, packet size 16k ... 2193.67 KByte/s
Sending to client, packet size 16k ... 1944.85 KByte/s
Receiving from client, packet size 32k ... 2267.23 KByte/s
Sending to client, packet size 32k ... 1952.12 KByte/s
Done.
Louis Ohland
2023-11-30 02:05:58 UTC
Permalink
Gods below, that looks like the 9-K is doing FDX with the 10Mb section.
Post by ***@gmail.com
3C529-TP Etherlink III
--------------------------------------------------------------------------------------------------
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 928.40 KByte/s
Sending to client, packet size 1k ... 804.07 KByte/s
Receiving from client, packet size 2k ... 942.55 KByte/s
Sending to client, packet size 2k ... 832.55 KByte/s
Receiving from client, packet size 4k ... 873.69 KByte/s
Sending to client, packet size 4k ... 832.20 KByte/s
Receiving from client, packet size 8k ... 814.47 KByte/s
Sending to client, packet size 8k ... 844.32 KByte/s
Receiving from client, packet size 16k ... 832.46 KByte/s
Sending to client, packet size 16k ... 840.34 KByte/s
Receiving from client, packet size 32k ... 830.54 KByte/s
Sending to client, packet size 32k ... 787.97 KByte/s
Done.
10/100 Mbps Ethernet (9-K)
--------------------------------------------------------------------------------------------------
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 1149.31 KByte/s
Sending to client, packet size 1k ... 1410.54 KByte/s
Receiving from client, packet size 2k ... 1381.70 KByte/s
Sending to client, packet size 2k ... 1651.40 KByte/s
Receiving from client, packet size 4k ... 1904.52 KByte/s
Sending to client, packet size 4k ... 1810.20 KByte/s
Receiving from client, packet size 8k ... 1971.34 KByte/s
Sending to client, packet size 8k ... 1892.09 KByte/s
Receiving from client, packet size 16k ... 2193.67 KByte/s
Sending to client, packet size 16k ... 1944.85 KByte/s
Receiving from client, packet size 32k ... 2267.23 KByte/s
Sending to client, packet size 32k ... 1952.12 KByte/s
Done.
lharr...@gmail.com
2023-11-30 05:09:59 UTC
Permalink
Post by Louis Ohland
Gods below, that looks like the 9-K is doing FDX with the 10Mb section.
Post by ***@gmail.com
3C529-TP Etherlink III
--------------------------------------------------------------------------------------------------
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 928.40 KByte/s
Sending to client, packet size 1k ... 804.07 KByte/s
Receiving from client, packet size 2k ... 942.55 KByte/s
Sending to client, packet size 2k ... 832.55 KByte/s
Receiving from client, packet size 4k ... 873.69 KByte/s
Sending to client, packet size 4k ... 832.20 KByte/s
Receiving from client, packet size 8k ... 814.47 KByte/s
Sending to client, packet size 8k ... 844.32 KByte/s
Receiving from client, packet size 16k ... 832.46 KByte/s
Sending to client, packet size 16k ... 840.34 KByte/s
Receiving from client, packet size 32k ... 830.54 KByte/s
Sending to client, packet size 32k ... 787.97 KByte/s
Done.
10/100 Mbps Ethernet (9-K)
--------------------------------------------------------------------------------------------------
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 1149.31 KByte/s
Sending to client, packet size 1k ... 1410.54 KByte/s
Receiving from client, packet size 2k ... 1381.70 KByte/s
Sending to client, packet size 2k ... 1651.40 KByte/s
Receiving from client, packet size 4k ... 1904.52 KByte/s
Sending to client, packet size 4k ... 1810.20 KByte/s
Receiving from client, packet size 8k ... 1971.34 KByte/s
Sending to client, packet size 8k ... 1892.09 KByte/s
Receiving from client, packet size 16k ... 2193.67 KByte/s
Sending to client, packet size 16k ... 1944.85 KByte/s
Receiving from client, packet size 32k ... 2267.23 KByte/s
Sending to client, packet size 32k ... 1952.12 KByte/s
Done.
So my switch shows 100Mb FDX. I ran the test again with sysmon showing CPU usage in Windows 95, and it's 100% CPU with the 9-K. I'm pretty sure I'm just maxing out the CPU at this point in terms of top-end speed.

For some reason the Reply TurboProcessor is a slow board despite the SynchroStream controller. A Pentium OverDrive at 83 MHz on a TurboProcessor 60/80 is about 20% slower than most online benchmarks of the same chip on any random 486 board, and even my own IBM PS/ValuePoint 433SX/X (Type 6382), http://ps-2.kev009.com/pcpartnerinfo/ctstips/3aa2.htm, just stomps the shit out of the Reply TurboProcessor in terms of performance. I've never used the benchmark tool Christian provided; maybe this weekend I'll compare it against the ValuePoint, which has a 100 Mb 3Com ISA card.
Christian Holzapfel
2023-11-30 08:44:10 UTC
Permalink
So my switch shows 100MB FDX, I did the test again with sysmon showing CPU usage in Windows 95 and it's 100% CPU with the 9-K. I am pretty sure I'm just maxing the CPU at this point in terms of top end speeds.
Any chance you measured the CPU usage when benchmarking the 3com card?

Our glorious tool https://ardent-tool.com/NIC/Ethernet_Bench.html#Results indicates 28.1 % load with the 3C529 on a 486SX-33, doing around 950 KB/s (under Linux!).

Could be something is not right yet with the interrupt handling...
Christian Holzapfel
2023-11-30 12:35:47 UTC
Permalink
Post by Christian Holzapfel
Could be something is not right yet with the interrupt handling...
Another conclusion from Alfred Arnold, https://ardent-tool.com/NIC/Ethernet_Bench.html#Results :

"Another result is that the busmastering 3C527 produces significantly more load than other, non-busmastering boards! [..] Another reason might be that the driver does not exploit the board's bus mastering capabilities, i.e. the received frames are written to buffers kept by the driver, and the driver then copies the data into the operating system's buffers. This might sound awkward (and it is in fact, since it voids the advantages of busmaster operation), but it is sometimes unavoidable, either due to the way the kernel interface works or buffer alignment constraints... this shouldn't cover the fact that the 3C527 delivers good performance and deserves the designation 'High Performance Adapter'."

Yes, I can confirm from the original driver design for the busmastering PCnet (PCI) card that I adapted for our 9-K: both the Windows and Linux drivers do work this way. They allocate buffers within the driver that the busmastering 9-K reads from and writes to (using burst-mode DMA!), but in the end it's again the CPU that has to copy the data from the driver's receive buffer into the operating system's network stack.
I wonder if this works the same under the 9-K's original AIX driver, or if they have direct DMA access to the network subsystem.
lharr...@gmail.com
2023-11-30 15:33:41 UTC
Permalink
Post by Christian Holzapfel
So my switch shows 100MB FDX, I did the test again with sysmon showing CPU usage in Windows 95 and it's 100% CPU with the 9-K. I am pretty sure I'm just maxing the CPU at this point in terms of top end speeds.
Any chance you measured the CPU usage when benchmarking the 3com card?
Our glorious tool https://ardent-tool.com/NIC/Ethernet_Bench.html#Results indicates 28.1 % load with the 3C529 on a 486SX-33, doing around 950 KB/s (under Linux!).
Could be something is not right yet with the interrupt handling...
I'll give it a try tonight! I only did a quick and dirty Google on how to check CPU usage in Windows 95; I'm open to suggestions on the best way to monitor it, or whether there is a way to get more detail, like interrupts.

I will say this: I copied some files over the network and it was noticeably faster. The current disk subsystem taps out around 7 MB/sec, as the BusLogic card plus the ZuluSCSI is the fastest setup I've found for the Reply turbo board; the Corvette turbo I found wouldn't configure, perhaps it's defective or not compatible. I know a lot of these later cards have multiple FRUs that stray into an area where they simply don't work with PS/2 stuff, much like this card used to be. Also, pretty sure there is a thread from the late, great WBSTClarke saying it was a no-go on the Reply boards anyway.

Anyway, maybe I'll do some timed testing copying all my doom.wads over the network... just for fun. Perhaps it's just interrupt spam, as the system doesn't appear sluggish during the copy. It's not one of those things where Windows starts grinding to a halt during the copy.

For the PS/ValuePoint, I have to get it reassembled and an OS reinstalled. Comparative testing with ISA architecture and the same processor might be a bit off; maybe this weekend or next. I'm not sure what value it would provide anyway: what's a 100 Mbit card going to do on the slow ISA bus? There's something to be said for the MCA bus. I once saw a 40 MB/sec read speed during a benchmark (small block size) that was a 100% cache hit on the BusLogic card itself and not the SCSI bus... made my jaw drop. I just wish the POD83 didn't benchmark at like 40 in SpeedSys.

As far as other thoughts about Micro Channel LAN cards: about a year ago I was using the LAN Adapter/A because of the benchmarking results on that page, but that was all done in Linux and I am a Windows guy. I switched to the 3Com card because it had drivers for enhanced mode (VxD), though I never really got around to testing whether it mattered.
Christian Holzapfel
2023-12-01 10:47:25 UTC
Permalink
I have silently updated the Win9x driver at

http://www.holzapfel.biz/8F62/sanremo-win9x.zip

No big deal, I corrected the adapter's name and added a menu in Adapter Settings that lets you choose the DMA burst mode between "Rx+Tx" (default) and "Tx only".
Just something for you to try out, performance related.
Note that in Network Settings, the driver needs to be removed, then re-installed, the settings changed, and the system rebooted to apply.

BTW, the card does not work at all without burst mode for Rx. Why that is I can't say; probably the ASIC does not support a non-bursting mode in that direction at all.
Christian Holzapfel
2023-12-01 10:49:17 UTC
Permalink
I also revised and debugged the IRQ handling code; it does not seem to me like the card is spamming more interrupts than needed.
No erroneous data IRQs, no error IRQs. Only regular Rx and Tx.
CPU load is still at 100 % on my test system.
I use TuneUp 97, which has a nice system load view built in.
Christian Holzapfel
2023-12-01 11:07:58 UTC
Permalink
Here are a few benchmarks from my systems:

Model 6886 (PC 750)
AMD K6-III @ 400 MHz
192 MB RAM
Windows 95
Burst Mode Rx+Tx
Netio 1.32

Packet size 1k bytes: 4751.55 KByte/s Tx, 4547.97 KByte/s Rx.
Packet size 2k bytes: 6082.19 KByte/s Tx, 4748.86 KByte/s Rx.
Packet size 4k bytes: 6772.06 KByte/s Tx, 5595.94 KByte/s Rx.
Packet size 8k bytes: 7378.91 KByte/s Tx, 5575.82 KByte/s Rx.
Packet size 16k bytes: 7996.92 KByte/s Tx, 6135.80 KByte/s Rx.
Packet size 32k bytes: 8226.20 KByte/s Tx, 6186.44 KByte/s Rx.



Model 9576i (Lacuna)
AMD X5 @ 133 MHz
64 MB RAM
No L2 cache :-(
Windows 95
Burst Mode Rx+Tx
Netio 1.32

Packet size 1k bytes: 1193.33 KByte/s Tx, 936.21 KByte/s Rx.
Packet size 2k bytes: 1290.28 KByte/s Tx, 1114.61 KByte/s Rx.
Packet size 4k bytes: 1389.92 KByte/s Tx, 1575.54 KByte/s Rx.
Packet size 8k bytes: 1741.82 KByte/s Tx, 1541.64 KByte/s Rx.
Packet size 16k bytes: 1969.17 KByte/s Tx, 1173.42 KByte/s Rx.
Packet size 32k bytes: 1934.02 KByte/s Tx, 1874.79 KByte/s Rx.



Model 7013-59H (RS/6000)
POWER2 @ 67 MHz
1.25 GB RAM
1 MB L2 cache
AIX 4.3.3 (IBM's driver)
Netio 1.32

Packet size 1k bytes: 5113.53 KByte/s Tx, 6395.88 KByte/s Rx.
Packet size 2k bytes: 6059.71 KByte/s Tx, 7318.23 KByte/s Rx.
Packet size 4k bytes: 7213.85 KByte/s Tx, 7667.44 KByte/s Rx.
Packet size 8k bytes: 7877.74 KByte/s Tx, 8501.13 KByte/s Rx.
Packet size 16k bytes: 8819.90 KByte/s Tx, 8811.09 KByte/s Rx.
Packet size 32k bytes: 8627.93 KByte/s Tx, 9067.06 KByte/s Rx.



Model 7006-42T (RS/6000)
PowerPC @ 120 MHz
192 MB RAM
0.5 MB L2 cache
AIX 4.3.3 (IBM's driver)
Netio 1.32

Packet size 1k bytes: 2558.84 KByte/s Tx, 619.03 KByte/s Rx.
Packet size 2k bytes: 2754.74 KByte/s Tx, 1297.03 KByte/s Rx.
Packet size 4k bytes: 3340.64 KByte/s Tx, 3816.94 KByte/s Rx.
Packet size 8k bytes: 4013.11 KByte/s Tx, 3763.10 KByte/s Rx.
Packet size 16k bytes: 4551.11 KByte/s Tx, 3802.62 KByte/s Rx.
Packet size 32k bytes: 4562.02 KByte/s Tx, 4275.55 KByte/s Rx.


What's interesting is the mixed performance on the RS/6000 systems. The later mid-range workstation with a faster CPU performs worse than the early 1992 high-performance server-class system. I assume the 7013 was built specifically for high-throughput and low-latency applications.

That the Lacuna and Reply perform so low is disappointing, but maybe there's a reason why IBM did not sell this adapter to PS/2 users - or maybe we will find a magic switch to make the card operate faster.
Ryan Alswede
2023-12-01 18:55:33 UTC
Permalink
Post by Christian Holzapfel
I assume, the 7013 was built specifically for high-throughput and low-latency applications.
I would dive into this machine and see what the ASIC values are with your breakout board, especially the PCI command register.
Louis Ohland
2023-12-01 21:55:03 UTC
Permalink
The Lacuna has a BIU, not an SSC.

Yeah, the Reply may have an actual SSC, but how much did IBM "tweak"
it? Sorta reminds me of the T/R chipsets that IBM fabbed for third-party
manufacturers.

How about a little M or Type 4 hot n heavy action?
Post by Christian Holzapfel
Model 9576i (Lacuna)
64 MB RAM
No L2 cache:-(
Windows 95
Burst Mode Rx+Tx
Netio 1.32
Packet size 1k bytes: 1193.33 KByte/s Tx, 936.21 KByte/s Rx.
Packet size 2k bytes: 1290.28 KByte/s Tx, 1114.61 KByte/s Rx.
Packet size 4k bytes: 1389.92 KByte/s Tx, 1575.54 KByte/s Rx.
Packet size 8k bytes: 1741.82 KByte/s Tx, 1541.64 KByte/s Rx.
Packet size 16k bytes: 1969.17 KByte/s Tx, 1173.42 KByte/s Rx.
Packet size 32k bytes: 1934.02 KByte/s Tx, 1874.79 KByte/s Rx.
That the Lacuna and Reply perform so low is disappointing, but maybe there's a reason why IBM did not sell this adapter to PS/2 users - or maybe we will find a magic switch to make the card operate faster.
Kevin Bowling
2023-12-01 21:55:27 UTC
Permalink
Post by Christian Holzapfel
[benchmark tables for the PC 750, Lacuna, 7013-59H and 7006-42T snipped]
What's interesting is the mixed performance on the RS/6000 systems. The later mid-range workstation machine with a faster CPU performs worse than the early 1992 high-performance server class system. I assume, the 7013 was built specifically for high-throughput and low-latency applications.
In particular, the memory bandwidth is insane on the POWER2 systems.
The MHz is also misleading: it is a superscalar processor with 2 integer
units and has double the cache (10 ns, I believe). It would be
interesting to compare the context switch time of the PowerPC.
Post by Christian Holzapfel
That the Lacuna and Reply perform so low is disappointing, but maybe there's a reason why IBM did not sell this adapter to PS/2 users - or maybe we will find a magic switch to make the card operate faster.
I suspect you may be able to profit from playing games with the tx and rx
interrupt masks... the Linux pcnet32 driver looks like it has a (limited)
poll mode, which would be preferable to dealing with an interrupt for
every packet on this old hw.
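To illustrate the idea (a sketch only; the hardware hooks below are simulated stand-ins, not the real pcnet32 registers): instead of taking one interrupt per packet, the handler masks the interrupt, drains up to a fixed budget of packets, and only then unmasks.

```c
#include <assert.h>

#define RX_BUDGET 16   /* cap work per interrupt so we can't livelock */

/* Simulated hardware state: packets pending and the interrupt mask. */
static int pending_packets;
static int irq_enabled = 1;

static int  hw_rx_ready(void)     { return pending_packets > 0; }
static void hw_rx_consume(void)   { pending_packets--; }
static void hw_irq_enable(int on) { irq_enabled = on; }

/* One interrupt services up to RX_BUDGET packets instead of one.
 * Returns the number of packets handled. */
int isr_rx(void)
{
    int handled = 0;
    hw_irq_enable(0);                 /* mask while we poll */
    while (hw_rx_ready() && handled < RX_BUDGET) {
        hw_rx_consume();              /* hand packet to the stack */
        handled++;
    }
    hw_irq_enable(1);                 /* unmask; leftovers re-interrupt */
    return handled;
}
```

At 100 Mbps with small frames this can collapse thousands of interrupts per second into a few hundred, which matters a lot on a 486-class machine.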
lharr...@gmail.com
2023-12-01 23:23:23 UTC
Permalink
Post by Kevin Bowling
[quoted benchmark tables snipped]
In particular, the memory bandwidth is insane on the POWER2 systems.
The MHz is also misleading, it is a superscalar processor with 2 integer
units and has double the cache (10ns I believe). It would be
interesting to compare the context switch time of the PowerPC.
Post by Christian Holzapfel
That the Lacuna and Reply perform so low is disappointing, but maybe there's a reason why IBM did not sell this adapter to PS/2 users - or maybe we will find a magic switch to make the card operate faster.
I suspect you may be able to profit from games with the tx and rx
interrupt masks.. the Linux pcnet32 driver looks like it has a (limited)
poll mode which would be preferable to dealing with an interrupt for
every packet on this old hw.
I ran netio with the 3Com 10 Mbit card and found CPU usage drops when transmitting. As the packet size gets bigger the CPU usage obviously drops, with 32k dropping to nearly 60% usage. I'll see if I can try the updated driver tonight.

As far as the Reply board, I looked up an old photo of SpeedSys and the POD83 scores 41.68. For memory bandwidth, L1 cache is 51.69 MB/sec, L2 cache is 31.54 MB/sec, and memory throughput is 22.85 MB/sec. Not sure if that's decent or not, but I am pretty sure it beats the ValuePoint... and for some reason the POD83 scores higher on that system. I really think it's down to the chipset.
lharr...@gmail.com
2023-12-01 23:35:52 UTC
Permalink
Post by Kevin Bowling
[quoted benchmark tables snipped]
In particular, the memory bandwidth is insane on the POWER2 systems.
The MHz is also misleading, it is a superscalar processor with 2 integer
units and has double the cache (10ns I believe). It would be
interesting to compare the context switch time of the PowerPC.
Post by Christian Holzapfel
That the Lacuna and Reply perform so low is disappointing, but maybe there's a reason why IBM did not sell this adapter to PS/2 users - or maybe we will find a magic switch to make the card operate faster.
I suspect you may be able to profit from games with the tx and rx
interrupt masks.. the Linux pcnet32 driver looks like it has a (limited)
poll mode which would be preferable to dealing with an interrupt for
every packet on this old hw.
I ran netio with the 3com 10mbit card and found CPU usage drops when transmitting. As the packet size gets bigger the CPU usage obviously drops with 32k dropping to nearly 60% usage. I'll see if I can try the updated driver tonight.
As far as the Reply Board, I looked up an old photo of SpeedSys and the POD83 scores 41.68, for memory bandwidth... L1 Cache is 51.69MB/sec, L2 Cache is 31.54MB/sec, and Memory Throughput is 22.85 MB/sec. Not sure if that's decent or not, but I am pretty sure it beats the Valuepoint... and for some reason the POD83 scores higher on that system. I really think it's down to chipset.
Forgot: cachechk has different numbers than SpeedSys. For the Reply board, cachechk says L1 is 113.8 MB/sec or 9.2 ns, L2 is 50.3 MB/sec or 20.9 ns, and main memory is 31.6 MB/sec or 33.2 ns. I've always been curious how the POD83 gets choked on this system.

Also, I found the SpeedSys/cachechk results for the ValuePoint system: SpeedSys scores the CPU as 60.41, the L1 cache is 69.40 MB/sec, L2 is 38.73 MB/sec, and main memory is 26.06 MB/s. Cachechk says L1 is 114.8 MB/sec or 9.1 ns, L2 is 49.2 MB/sec or 21.3 ns, and main memory is 28.1 MB/sec or 37.3 ns. So... I was wrong, the ValuePoint edges out the Reply TurboBoard by a few MB/sec of memory bandwidth... but would a few MB/sec really make a ~33% dent in CPU performance for these kinds of systems back then?
Louis Ohland
2023-12-02 00:59:28 UTC
Permalink
https://www.ardent-tool.com/tech/ASIC_Info.html#71G0438

The Lacuna and Reply use BIUs. The Reply 60/80 also actually has an SSC.

_MAYBE_ the inclusion of a BIU turbo-diddles the performance of the 9-K?

A way to investigate this angle would be to use a Type 3 ["M"] or a Type
4 ["N"] complex, both of which do not have a BIU.
Post by Christian Holzapfel
That the Lacuna and Reply perform so low is disappointing, but maybe
there's a reason why IBM did not sell this adapter to PS/2 users - or
maybe we will find a magic switch to make the card operate faster.
Louis Ohland
2023-12-02 03:38:47 UTC
Permalink
Another factoid: the Lacuna and Reply boards use IDE.

Even _IF_ the BIU / IDE thing is a lead weight, I don't expect a non-IDE
/ non-BIU system to hit 10MB/s...

So even though Albert Einstein breathed air like I do, that correlation
does not mean that we are both geniuses...
Post by Louis Ohland
https://www.ardent-tool.com/tech/ASIC_Info.html#71G0438
The Lacuna and Reply use BIUs, The Reply 60/80 also actually has an SSC.
_MAYBE_ the inclusion of a BIU turbo-diddles the performance of the 9-K?
A weg to investigate this angle would be to use a Type 3 ["M"] or a Type
4 ["N"] complex, both of which do not have a BIU.
Post by Christian Holzapfel
That the Lacuna and Reply perform so low is disappointing, but maybe
there's a reason why IBM did not sell this adapter to PS/2 users - or
maybe we will find a magic switch to make the card operate faster.
Christian Holzapfel
2023-12-04 13:39:36 UTC
Permalink
I did two more benchmarks.
This time I'm on my PC 750 (PCI/MCA) @ 400 MHz again, running Linux 2.2.17, using Netio 1.11 (because I don't have GLIBC3 there) and running the RX direction, so I'm sending data from a modern multi-core, multi-GHz system to the 9-K:

NETIO - Network Throughput Benchmark, Version 1.11
(C) 1997-1999 Kai Uwe Rommel

TCP/IP connection established.
1k packets: 6513 k/sec
2k packets: 5564 k/sec
4k packets: 5805 k/sec
8k packets: 5256 k/sec
16k packets: 6782 k/sec
32k packets: 7308 k/sec

As a comparison, I am using the exact same setup, but sending data to a 9-P card, which is the PCI variant of our MCA 9-K.
It has the same PCnet ethernet chip and chip revision, accompanied by the same amount and speed rating of on-board memory.
The Linux drivers of the PCnet (9-P) and San Remo (9-K) are 99.5 % identical with the exception that the 9-K driver tunnels all control I/O accesses through the ASIC, which is not speed relevant.
All driver parameters like RX/TX queue lengths, interrupt rate, burst settings, error handling and all other configuration of the PCnet chip's registers are exactly the same.
So from the hardware point of view, the only difference is that in the 9-K case, the hardware access is tunneled through the PC 750's PCI-to-MCA bridge, and then through the 9-K's MCA-to-PCI ASIC bridge.

This is how the PCI 9-P performs:

NETIO - Network Throughput Benchmark, Version 1.11
(C) 1997-1999 Kai Uwe Rommel

TCP/IP connection established.
1k packets: 11433 k/sec
2k packets: 11422 k/sec
4k packets: 11533 k/sec
8k packets: 11543 k/sec
16k packets: 11478 k/sec
32k packets: 11404 k/sec

So I dare to conclude that the system itself, Linux, the general driver structure (no matter if 9-P or 9-K), the interrupt rate, the data rate, DMA and CPU usage and of course the PCnet chip itself are generally capable of saturating a 100 Mbit Ethernet link under the exact same operating parameters.
The only difference is that the 9-K is attached to the system through the two bridge chips in my case.

There might be some parameters in IBM's original AIX driver that they adjusted inside the ASIC or PCnet to make those two work better together to squeeze a little more out of it - but generally (see 9-P), there is no obvious misconfiguration of the PCnet chip.
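For anyone wondering what "tunneling control I/O through the ASIC" could look like in code, here is an illustrative sketch. The index/data window pattern and all register numbers below are my assumptions for illustration, not the real 9-K layout; the hardware is simulated by an array.

```c
#include <assert.h>
#include <stdint.h>

/* Simulated device registers. On the PCI 9-P the driver hits the
 * PCnet registers directly; on the MCA 9-K every control access goes
 * through an ASIC window first. */
static uint32_t pcnet_regs[32];

/* Direct access, as on the PCI 9-P. */
static void pcnet_write_direct(int reg, uint32_t val)
{
    pcnet_regs[reg] = val;
}

/* Tunneled access, as on the MCA 9-K: first select the target register
 * via a (hypothetical) ASIC index port, then write the value through a
 * (hypothetical) ASIC data port. */
static int asic_index;
static void asic_out(int port, uint32_t val)
{
    if (port == 0)                  /* hypothetical index port */
        asic_index = val;
    else                            /* hypothetical data port */
        pcnet_regs[asic_index] = val;
}

static void pcnet_write_tunneled(int reg, uint32_t val)
{
    asic_out(0, reg);   /* extra bus cycle: select register */
    asic_out(1, val);   /* extra bus cycle: write value */
}
```

Same end result, roughly twice the bus cycles per control access; as noted above, this shouldn't be speed relevant, since packet data moves by DMA rather than through this window.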
Ryan Alswede
2023-12-04 14:11:41 UTC
Permalink
Post by Christian Holzapfel
There might be some parameters in IBM's original AIX driver that they adjusted inside the ASIC
Would be cool to see what the ASIC parameters are on your AIX server for the fields we aren't able to infer their meanings. Maybe a project for next year when your time allows.

0x1D 1 WO Init 0x00 Written to on Init only
0x1E 1 WO Init 0x4F Written to on Init only
0x1F 1 WO Init 0x04 Written to on Init only
0x20 2 WO Init 0x03FF Written to on Init only
0x22 1 WO Init 0x7F Written to on Init only
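Replaying that captured init sequence is trivial once (if) the meanings are known. A placeholder sketch, with the offsets, widths and values taken from the trace above and everything else (the access helpers, the simulated register space, the little-endian assumption for the 16-bit write) assumed by me:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated ASIC register space; a real driver would do port or MMIO
 * writes here instead. */
static uint8_t asic[0x40];

static void asic_write8(int off, uint8_t v)
{
    asic[off] = v;
}

static void asic_write16(int off, uint16_t v)   /* assumed little-endian */
{
    asic[off]     = (uint8_t)(v & 0xFF);
    asic[off + 1] = (uint8_t)(v >> 8);
}

/* Write-only init registers observed on the 9-K ASIC. Their meaning is
 * unknown; DMA tuning is one guess. */
void asic_replay_init(void)
{
    asic_write8(0x1D, 0x00);
    asic_write8(0x1E, 0x4F);
    asic_write8(0x1F, 0x04);
    asic_write16(0x20, 0x03FF);
    asic_write8(0x22, 0x7F);
}
```

Comparing these values against what the AIX driver programs on a 7013 would show whether IBM tunes them per platform.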
Louis Ohland
2023-12-04 14:20:54 UTC
Permalink
I keep having the suspicion that these are passed to the PC-Net
registers. Are these values passed to the PC-Net registers?

0x1D 1 WO Init 0x00 Written to on Init only
0x1E 1 WO Init 0x4F Written to on Init only
0x1F 1 WO Init 0x04 Written to on Init only
0x20 2 WO Init 0x03FF Written to on Init only
0x22 1 WO Init 0x7F Written to on Init only
Post by Ryan Alswede
Post by Christian Holzapfel
There might be some parameters in IBM's original AIX driver that they adjusted inside the ASIC
Would be cool to see what the ASIC parameters are on your AIX server for the fields we aren't able to infer their meanings. Maybe a project for next year when your time allows.
Ryan Alswede
2023-12-04 14:39:37 UTC
Permalink
Post by Louis Ohland
Are these values passed to the PC-Net registers?
Negative, Captain. ASIC only.

I smell DMA settings, but IBM won't tell us so much as a bit.
Louis Ohland
2023-12-04 15:06:25 UTC
Permalink
You smell that? It's DMA. Nothing else in the world that smells like it.
DMA... it smells like... Victory!
Post by Ryan Alswede
I smell DMA settings but IBM won't tell us anything so much as a bit.
Wolfgang Gehl
2023-12-06 22:48:49 UTC
Permalink
Post by Christian Holzapfel
So from the hardware point of view, the only difference is that in the 9-K case, the hardware access is tunneled through the PC 750's PCI-to-MCA bridge, and then through the 9-K's MCA-to-PCI ASIC bridge.
My guess is a timing problem in the PCI-to-MCA bridge. May I ask you for
another test run? The PC 750 supports a PCI bus clock of 50MHz. I could
imagine that the MCA bus would cope much better with this than with the
66MHz bus clock.

The AMD K6-III can handle an external clock of 50MHz according to the
PowerLeap upgrade manual:
https://ardent-tool.com/CPU/PL-K6-IIIv2.pdf
It then runs internally at 300 MHz. That should be enough to get
100Mbits out of the LAN.
Christian Holzapfel
2023-12-07 11:29:20 UTC
Permalink
Post by Wolfgang Gehl
My guess is a timing problem in the PCI-to-MCA bridge. May I ask you for
another test run? The PC 750 supports a PCI bus clock of 50MHz. I could
imagine that the MCA bus would cope much better with this than with the
66MHz bus clock.
With a 50 MHz base clock instead of 66, the adapter tops out at ~6700 k/sec:

NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel

TCP/IP connection established.
1k packets: 5326 k/sec
2k packets: 6079 k/sec
4k packets: 6273 k/sec
8k packets: 6648 k/sec
16k packets: 6755 k/sec
32k packets: 6680 k/sec

I guess the PCI clock is always BaseClock/2 on my system.
Wish I had a datasheet for the PC 750 clock chip, IMI SC471...
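Assuming the guess above holds (PCI = BaseClock/2 — an inference from the measurements, not from an SC471 datasheet), the selectable base clocks map out like this:

```c
#include <assert.h>

/* If the PCI clock on the PC 750 is always BaseClock/2 (a guess, not
 * confirmed by any SC471 documentation), these are the resulting PCI
 * bus clocks. The MCA bridge then sees traffic paced by this clock. */
int pci_clock_mhz(int base_clock_mhz)
{
    return base_clock_mhz / 2;
}
```

So 66 MHz base gives the standard 33 MHz PCI, the 50 MHz test run gives 25 MHz, and an 80 MHz base would give 40 MHz.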
Louis Ohland
2023-12-07 13:32:33 UTC
Permalink
Let's play the name game.

IMI is who? Perhaps we can find out who bought them.
Post by Christian Holzapfel
IMI SC471
Louis Ohland
2023-12-07 13:50:23 UTC
Permalink
International Microcircuits Inc. (IMI)
Post by Louis Ohland
Let's play the name game.
IMI is who? Perhaps we can findt who bought them.
Post by Christian Holzapfel
IMI SC471
Louis Ohland
2023-12-07 13:54:27 UTC
Permalink
In 2000, Cypress acquired IMI. After the acquisition, Refioglu ran the
timing technology division for Cypress. Then, he left the company and
started SpectraLinear in 2006. The startup then acquired Cypress' PC
clock-chip division in October of 2006.
Post by Louis Ohland
International Microcircuits Inc. (IMI)
Post by Louis Ohland
Let's play the name game.
IMI is who? Perhaps we can findt who bought them.
Post by Christian Holzapfel
IMI SC471
Louis Ohland
2023-12-07 14:05:29 UTC
Permalink
https://www.ic72.com/pdf_file/i/26693.pdf

Still haven't got the website yet.

Not the SC471, but it's something.
Post by Louis Ohland
In 2000, Cypress acquired IMI. After the acquisition, Refioglu ran the
timing technology division for Cypress. Then, he left the company and
started SpectraLinear in 2006. The startup then acquired Cypress' PC
clock-chip division in October of 2006.
[earlier quotes snipped]
Louis Ohland
2023-12-07 14:08:47 UTC
Permalink
https://ardent-tool.com/datasheets/IMI_SC425.pdf
Post by Louis Ohland
https://www.ic72.com/pdf_file/i/26693.pdf
Still haven't got the website yet.
Not the SC471, but it's something
[earlier quotes snipped]
Louis Ohland
2023-12-07 14:16:01 UTC
Permalink
www.imicorp.com
Post by Louis Ohland
https://ardent-tool.com/datasheets/IMI_SC425.pdf
[earlier quotes snipped]
Louis Ohland
2023-12-07 14:18:10 UTC
Permalink
ftp://ftp.best.com/pub.i/imiweb8/ds/sc471.pdf
Post by Louis Ohland
www.imicorp.com
[earlier quotes snipped]
Louis Ohland
2023-12-07 14:19:45 UTC
Permalink
https://web.archive.org/web/19970409051608/http://www.imicorp.com/products/prodlit/sfds/sc471.htm

crappy...
Post by Louis Ohland
ftp://ftp.best.com/pub.i/imiweb8/ds/sc471.pdf
[earlier quotes snipped]
JWR
2023-12-07 15:20:45 UTC
Permalink
Post by Louis Ohland
https://web.archive.org/web/19970409051608/http://www.imicorp.com/products/prodlit/sfds/sc471.htm
crappy...
Post by Louis Ohland
[earlier quotes snipped]
Would this help?:

https://www.digchip.com/datasheets/search.php?pn=IMISC471
--
Jelte,
Admirer of the letter of IBM with blue Ishiki
Christian Holzapfel
2023-12-07 17:14:22 UTC
Permalink
Brilliant, thank you two!
By unsoldering pin 11 (S0) of CPU clock chip U1 and tying it to GND, I should be able to achieve 80 MHz on the CPU and 40 on the PCI bus. Quite risky with that much overclocking, but generally worth a try.
But it's getting off topic. If I ever try it and things get a 9-K-ish twist or smell again, I will post here :-)
Louis Ohland
2023-12-07 14:44:46 UTC
Permalink
PROCESSOR SPECIFIC CLOCK GENERATOR, 80MHZ, CMOS, PDSO28

still nothing
Post by Christian Holzapfel
Post by Wolfgang Gehl
My guess is a timing problem in the PCI-to-MCA bridge. May I ask you for
another test run? The PC 750 supports a PCI bus clock of 50MHz. I could
imagine that the MCA bus would cope much better with this than with the
66MHz bus clock.
NETIO - Network Throughput Benchmark, Version 1.7
(C) 1997-1999 Kai Uwe Rommel
TCP/IP connection established.
1k packets: 5326 k/sec
2k packets: 6079 k/sec
4k packets: 6273 k/sec
8k packets: 6648 k/sec
16k packets: 6755 k/sec
32k packets: 6680 k/sec
I guess the PCI clock is always BaseClock/2 on my system.
Wish I had a datasheet for the PC 750 clock chip, IMI SC471...
Christian Holzapfel
2023-12-03 12:12:49 UTC
Permalink
Post by ***@gmail.com
What are the chances of a DOS driver?
The leaked sources for the original PCnet DOS packet driver are around.
They are plain x86 Assembly, processed by

# MAKE Version 3.6
# TASM Version 3.1
# TLINK Version 5.1

Generally, the places where such a driver needs modification to work with our 9-K are well known to Ryan and me, and properly documented now - but I'm not that fluent in Assembly (yet).
Furthermore, the DOS driver works in 16-bit mode only, while the 9-K ASIC, and also the PCnet chip in our case, need some 32-bit addressing.
So there's a little more to it, but it's generally doable.
Maybe in the boring, gray start of next year I could look into it.
If someone else is willing to pick that up, I'm happy to help :-)
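On the addressing point: a 16-bit real-mode packet driver juggles segment:offset pairs, while the busmastering PCnet wants flat 32-bit physical addresses for its init block and descriptor rings. The conversion itself is simple (sketch below, in C rather than the TASM the driver would use); the real work is doing it in assembly and keeping the buffers below 1 MB.

```c
#include <assert.h>
#include <stdint.h>

/* Real-mode segment:offset -> 20-bit physical address. A DOS packet
 * driver's buffers live below 1 MB, so the result always fits in the
 * 32-bit physical addresses the PCnet's descriptor rings expect. */
uint32_t real_mode_phys(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}
```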
lharr...@gmail.com
2023-12-03 22:05:50 UTC
Permalink
Post by Christian Holzapfel
Post by ***@gmail.com
What are the chances of a DOS driver?
The leaked sources for the original PCnet DOS packet driver are around.
They are plain x86 Assembly, processed by
# MAKE Version 3.6
# TASM Version 3.1
# TLINK Version 5.1
Generally, the places such a driver needs modification to work with our 9-K are well known to Ryan and me and properly documented now - but I'm not that fluent in Assembly (yet).
Furthermore, the DOS driver is working in 16-bit mode only, while the 9-K ASIC and also the PCnet chip in our case need some 32 bit addressing.
So it's a little more to it, but generally doable.
Maybe in the boring, gray start of next year I could look into it.
If someone else is willing to pick that up, I'm happy to help :-)
I've been pretty decent at writing code in C# and such, but I'm not sure I could pick this up and be useful. I'd have to start reading about it and see.


As far as the 9x driver goes, I have been unable to get the updated driver working at all, even with a fresh install of Windows 95. I'll get an IP but can't ping the gateway - the reply times out. I didn't change any of the default settings of the driver... I think the buffer or whatever was set to RX + TX. This weekend has been a bit busier than I expected and I have to run, so I'm not sure how much more I can play with it this weekend. Also, with the fresh install, the 32-bit driver for my BusLogic card is intermittently loading... which, what the hell?!? So I want to nuke the install again and find a more trusted ISO of Windows 95 C. I downloaded some fresh copies last night and had one fail because it couldn't find a file in a cab in the middle of the install... turns out people had commented on the download page that the ISO was wonky. I don't burn the ISOs to CD-R, as ZuluSCSI will emulate them as a CD-ROM if you put the ISO on the SD card and name it properly.


Anyway, with the fresh install, performance was a tad better. I truly think the Reply board is slow in general. The 486 Overdrive 100 MHz and the Kingston TurboChip both score about the same as the POD83... which is 20% slower than any rando board out there that can take these chips. A few others with the Reply board have gotten the same results. Not sure why.


TCP connection established ...
Receiving from client, packet size 1k ... 1267.69 KByte/s
Sending to client, packet size 1k ... 1425.52 KByte/s
Receiving from client, packet size 2k ... 1583.88 KByte/s
Sending to client, packet size 2k ... 1693.53 KByte/s
Receiving from client, packet size 4k ... 2123.17 KByte/s
Sending to client, packet size 4k ... 1846.58 KByte/s
Receiving from client, packet size 8k ... 2132.16 KByte/s
Sending to client, packet size 8k ... 1903.23 KByte/s
Receiving from client, packet size 16k ... 2304.99 KByte/s
Sending to client, packet size 16k ... 1941.89 KByte/s
Receiving from client, packet size 32k ... 2434.78 KByte/s
Sending to client, packet size 32k ... 1929.03 KByte/s
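For cross-checking numbers like these without NETIO, a rough equivalent of its "Sending to client" measurement can be sketched in a few lines of Python on the modern end of the link. This is not NETIO's actual protocol - just a plain TCP blast at any sink that discards incoming data (a netcat listener will do); host, port, and timings are illustrative:

```python
import socket
import time

def tcp_send_throughput(host: str, port: int,
                        packet_size: int = 32 * 1024,
                        seconds: float = 5.0) -> float:
    """Send fixed-size packets at a TCP sink for a few seconds and
    return the achieved rate in KByte/s (1 K = 1024 bytes)."""
    payload = b"\x00" * packet_size
    sent = 0
    with socket.create_connection((host, port)) as sock:
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            sock.sendall(payload)
            sent += packet_size
        elapsed = time.monotonic() - start
    return sent / elapsed / 1024.0
```

Repeating the run for the same packet sizes as NETIO (1k through 32k) gives figures directly comparable to the tables in this thread.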
Louis Ohland
2023-12-03 22:58:41 UTC
Permalink
Seems the secret is using a non-IBM SCSI controller with the RP2040
ZuluSCSI.

BusLogic BT-646 / SDC3211F
ZuluSCSI RP2040