Authored by: jesse on Friday, June 08 2012 @ 12:06 PM EDT |
And a distributed lock manager.
Each host could contribute local disks to the cluster, allowing any node to
read/write to the drives.
The only restriction I remember is that disks used for paging could not be
shared.
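The shared-vs-exclusive behaviour a distributed lock manager enforces can be sketched in a few lines. This is an illustrative model only, with invented names (`LockManager`, `request`, `release`, the resource strings), not the actual VMS DLM API or its six lock modes:

```python
# Hypothetical sketch of distributed-lock-manager semantics: any number
# of shared (read) holders may coexist, but an exclusive (write) holder
# conflicts with everyone else.

class LockManager:
    def __init__(self):
        self.holders = {}  # resource name -> list of (node, mode)

    def request(self, resource, node, mode):
        """Grant a lock if compatible with current holders, else refuse."""
        current = self.holders.setdefault(resource, [])
        if mode == "exclusive" and current:
            return False  # exclusive conflicts with any existing holder
        if mode == "shared" and any(m == "exclusive" for _, m in current):
            return False  # shared conflicts only with an exclusive holder
        current.append((node, mode))
        return True

    def release(self, resource, node):
        self.holders[resource] = [
            (n, m) for n, m in self.holders.get(resource, []) if n != node
        ]

dlm = LockManager()
assert dlm.request("DISK$USER1", "nodeA", "shared")
assert dlm.request("DISK$USER1", "nodeB", "shared")       # readers coexist
assert not dlm.request("DISK$USER1", "nodeC", "exclusive")  # blocked by readers
```

In a real cluster the lock state itself has to survive node failures, which is what makes the problem hard; the compatibility rule above is the easy part.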
Authored by: complex_number on Friday, June 08 2012 @ 01:18 PM EDT |
The Caching was in the storage controllers.
Does that count?
I could probably still cite the DEC 2-5-2 part numbers for a VAX
Cluster with dual storage controllers (sad, but it was my job for a while).
The CI-780 and CI-750 also had caching. OK, not very much by today's standards,
but there was some.
Going slightly off topic:
The big thing about the original VMS Cluster (circa 1983) concept was that the
SC001 was non-powered and thus not a single point of failure. (SC = Star
Coupler)
I can remember it all being FCC tested in the 'Bubble' near the Marlborough
Plant in 1983. I was there doing the same thing for the Compact VAX 11/730. I
was the project engineer for it.
---
Ubuntu & 'apt-get' are not the answer to Life, The Universe & Everything which
is of course, "42" or is it 1.618?
Authored by: sproggit on Friday, June 08 2012 @ 04:25 PM EDT |
Back when I was about 6 years old ... ;) ...
I used DPS7/DPS7000 systems from Honeywell Bull starting in about 1985. For
years prior to that the DPS7 range had the capability to dynamically share discs
between systems and to do some pretty clever things with caching as well.
For example, within a single machine you could control the number of executing
tasks that could share access to any given file with
FILSHARE=nn;
in its system configuration file.
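In modern terms, FILSHARE=nn behaves like a counting semaphore on the file: at most nn tasks can have it open at once. A rough analogy, with invented helper names (`open_shared_file`, `close_shared_file`) rather than anything from GCOS7:

```python
# Analogy for FILSHARE=4; -- at most four concurrent sharers of a file.
import threading

FILSHARE = 4
file_slots = threading.BoundedSemaphore(FILSHARE)

def open_shared_file(timeout=0.1):
    """Return True if a sharing slot was free, False otherwise."""
    return file_slots.acquire(timeout=timeout)

def close_shared_file():
    file_slots.release()

# Four opens succeed; a fifth is refused until someone closes.
opens = [open_shared_file() for _ in range(4)]
fifth = open_shared_file()
assert all(opens) and not fifth
```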
More than that, if you had two or more systems with shared disc subsystems, the
use of PSI (Peripheral Subsystem Interconnect) cables allowed the host to run
multi-level caching - i.e. in the CPU and in the disc cabinet - to improve the
performance and throughput.
This technology supported very exotic sharing of systems and caches, so for
example data access to files on a shared volume could be achieved with:-
SHARE=ONEWRITE,DUALSHARE=ONEWRITE;
as parameters in JCL where files were being assigned to program steps. In the
above, the SHARE option governs local, in-host access to a file, DUALSHARE
refers to interoperability with another host, ONEWRITE means that only one
running process can have read/write access to the file, but others can have read
access if they want it [they can detect the contents may be volatile with a
warning at connect time].
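The ONEWRITE rule described above is a one-writer/many-readers policy with a volatility warning for readers. A minimal model, assuming invented names (`SharedFile`, `assign`) purely for illustration, not GCOS7 JCL:

```python
# Illustrative model of ONEWRITE: one read/write assignment at a time,
# any number of readers, and readers attaching while a writer is present
# get warned that the contents may be volatile.

class SharedFile:
    def __init__(self, name):
        self.name = name
        self.writer = None
        self.readers = set()

    def assign(self, step, mode):
        """Attach a program step; returns (granted, warning)."""
        if mode == "write":
            if self.writer is not None:
                return False, None            # only one writer allowed
            self.writer = step
            return True, None
        self.readers.add(step)
        warn = "contents may be volatile" if self.writer else None
        return True, warn

f = SharedFile("MASTER.DAT")
ok, _ = f.assign("STEP1", "write")
ok2, warning = f.assign("STEP2", "read")
assert ok and ok2 and warning == "contents may be volatile"
ok3, _ = f.assign("STEP3", "write")
assert not ok3                                # second writer refused
```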
There was a special version of these multi-system and shared-cache parameters
which was "DUALSHARE=FREE" that was reserved for what GCOS7 called
"CATALOGS" - essentially a directory listing that recorded extraneous
information about files, such as access rights. Catalogs could do this because
they were essentially functionality-limited CODASYL databases which could
support "record locking".
With the release of GCOS7 V3A7 [I'd have to check back to get the date right]
the system introduced even more intelligence, with RPS [rotational position
sensing]. Basically, read/write instructions were sent to the disc cluster from
one or more GCOS7 hosts. The disc controller [which could manage clusters of
e.g. 4 drive spindles at a time] would hold the requests in an access queue and
monitor it. The CPU in the drive controller was so fast and powerful that it
knew exactly what sector of the disc was passing under the read/write heads of
the drives, and it would manipulate the instruction queue and cache in real time
to accelerate read/write activity. It took into account things like head
movement, spin speeds and cache size to deliver performance.
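The core of that controller trick can be sketched as rotational-position-aware queue ordering: knowing which sector is under the heads, serve the queued request whose sector will arrive soonest. The sector count and queue below are made-up numbers, not DPS7000 controller internals:

```python
# Back-of-the-envelope sketch of rotational-position scheduling.

SECTORS_PER_TRACK = 64

def rotational_distance(current, target):
    """Sectors the platter must rotate before `target` is under the heads."""
    return (target - current) % SECTORS_PER_TRACK

def next_request(current_sector, queue):
    """Pick the queued request with the smallest rotational latency."""
    return min(queue, key=lambda req: rotational_distance(current_sector, req))

queue = [12, 60, 3, 40]
assert next_request(58, queue) == 60   # only 2 sectors away
assert next_request(61, queue) == 3    # nearest after wrapping past sector 0
```

A real controller would also weigh seek (head-movement) cost and cache state against rotational latency, as the comment above describes; this sketch covers only the rotational term.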
I was our company's mainframe Sysadmin, as well as being our "Technology
Operations Manager", which meant that I got to play with the OS builds and
tuning for these machines. I remember "turning on" RPS for the first
time, about 2 weeks after I'd been running V3A7 on my employer's DPS7000. We saw
an 80% reduction in disc activity and a massive improvement in response times
overnight.
I'm not sure I am qualified to read these patents and interpret their claims
correctly, but it seems to me that much of what's being argued here is a mixture
of the above, of SAN technology, of SCSI technology and so on. Taking something
already done in one computing paradigm and moving it to another is not
innovation, it's just a logical extension of what's gone before. [ Reply to This | Parent | # ]
Authored by: Anonymous on Friday, June 08 2012 @ 07:53 PM EDT |
Didn't some versions of rsync have some type of caching?