Fig. 1) Topology of demonstration at SC98
Fig. 2) Remote Lecture from USA to Japan
The lecture was held at the Keio University Shonan Fujisawa Campus (SFC), Japan (Fig. 2). The client system room at SFC was located at a distance from the lecture room, so the PC that sends and receives DV data was not in the classroom. The video stream from the USA was carried from the client system room to the lecture room over a CATV cable. Because the consumer products we used cannot convert analog video input to the DV digital format, the video data sent to the USA was a picture of the monitor placed in the operation room; the picture on that monitor came from the DV camera in the lecture room.
The lecture was conducted at half frame rate, and there were no packet drops at that rate. During the lecture we changed the rate to demonstrate and explain frame discarding.
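Frame discarding is possible because each DV frame is intraframe-coded, so any subset of frames can be dropped without breaking the decoder. A minimal sketch of selecting frames for a target fraction of full rate (our illustration, not the tool's actual code; the function name and accumulator scheme are assumptions):

```python
def discard_frames(frames, keep_ratio):
    """Keep roughly keep_ratio of the frames, evenly spaced.

    keep_ratio=0.5 gives half frame rate; lower values give the
    reduced rates used under congestion.
    """
    kept = []
    acc = 0.0
    for frame in frames:
        acc += keep_ratio
        if acc >= 1.0:          # accumulator crosses 1 -> transmit this frame
            acc -= 1.0
            kept.append(frame)
    return kept

# 30 captured frames at half rate -> 15 frames transmitted
half = discard_frames(list(range(30)), 0.5)
```

The accumulator spreads the kept frames evenly over time, which matters for video: dropping every other frame looks far smoother than dropping frames in bursts.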
The network bandwidth used on TransPAC for this lecture is shown in Fig. 3. The graph was created by MRTG. The green area is a five-minute exponentially decaying moving average of the input bits per second at the USA-to-Japan Tokyo Exchange Point; the blue line is the corresponding average of the output bits per second at the same point.
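As an illustration of the averaging used in these graphs, a five-minute exponentially decaying moving average of a traffic counter can be computed as follows (a generic sketch under assumed constants and sampling interval, not MRTG's actual implementation):

```python
import math

def ewma_update(avg, sample_bps, interval_s, tau_s=300.0):
    """One update of an exponentially decaying moving average.

    tau_s=300 corresponds to a five-minute decay constant.
    """
    alpha = math.exp(-interval_s / tau_s)
    return alpha * avg + (1.0 - alpha) * sample_bps

# With a steady 10 Mbps input sampled every 60 s, the average
# converges toward the steady rate.
avg = 0.0
for _ in range(50):
    avg = ewma_update(avg, 10e6, 60.0)
```

The exponential decay smooths out short bursts, which is why the plotted curves show sustained load rather than per-packet spikes.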
Fig. 3) Traffic at TransPAC
There was no other traffic using TransPAC; thus, the traffic over TransPAC was generated entirely by our communication tool. The blue line peaks during the lecture, while the green area peaks when we tested the full frame rate from the USA to Japan. Packet loss due to the limited available bandwidth was detected when we tried the full frame rate.
The second effort was the conversation demonstration at the iGRID exhibition at ``Super Computer Conference 98'' held in Orlando. The network topology of this effort is the same as that of the inter-continental lecture. The DV stream to Korea was distributed by point-to-multipoint VC (Virtual Connection) ATM switching. The effective bandwidth of the trans-Pacific ATM link and the link to Korea was limited to 30 Mbps. The trans-Pacific link is the bandwidth bottleneck of the route path, and this value is not sufficient to transfer full-rate DV data. Moreover, we shared the network link with commodity IP traffic on StarTAP and vBNS on a best-effort basis, so our traffic experienced common Internet properties such as packet loss and jitter under congestion. Therefore, the frame rate was adapted between 1/100 and half frame rate according to the end-to-end network conditions.
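The adaptation between 1/100 and half frame rate can be sketched as a simple loss-driven controller (the thresholds and step sizes below are illustrative assumptions, not the measured behavior of our tool):

```python
MIN_RATIO = 1.0 / 100   # floor under heavy congestion
MAX_RATIO = 1.0 / 2     # ceiling imposed by the 30 Mbps shared path

def adapt_ratio(ratio, loss_fraction):
    """Multiplicative decrease on loss, additive increase otherwise.

    loss_fraction is the packet-loss rate observed by the receiver
    over the last feedback interval (an assumed feedback mechanism).
    """
    if loss_fraction > 0.05:        # congestion: back off quickly
        ratio /= 2.0
    elif loss_fraction == 0.0:      # clean interval: probe upward slowly
        ratio += 0.01
    return max(MIN_RATIO, min(MAX_RATIO, ratio))
```

Backing off multiplicatively while probing additively mirrors the behavior of TCP-like congestion control, which keeps the DV stream from starving the commodity IP traffic sharing StarTAP and vBNS.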
We compared the latency with a cellular phone call from the USA to Japan. The latency of our communication tool was the same as that of the cellular call. Thus, our communication tool is usable for conversation without problems.
The network bandwidth used on TransPAC for this demonstration is shown in Fig. 4. The graph was created by MRTG and generated at the same measurement point as the one in Fig. 3.
Fig. 4) Traffic at TransPAC
More bandwidth was used for traffic from Japan to the USA than from the USA to Japan. Because the iGRID demonstration was held in the USA, the frame rate from Japan to the USA was set higher than the frame rate from the USA to Japan.
We had some routing problems and physical link problems in SCinet during the demonstration at SC98. Although we experienced 0-40% packet loss during the SC98 demonstration, we did not have difficulty communicating with Japan using our application.