
Replication with QoS support for a Distributed Multimedia System

"Accepted for the 27th EUROMICRO Conference, Workshop on Multimedia and Telecommunication; Warsaw, September 4-6, 2001"


Giwon On1, Jens Schmitt1, Michael Liepert1, and Ralf Steinmetz1,2

1: Darmstadt University of Technology, Merckstr. 25 • 64283 Darmstadt • Germany

2: FhG-IPSI

Dolivostr. 15 • 64293 Darmstadt • Germany

Email: {Giwon.On,Jens.Schmitt,Michael.Liepert,Ralf.Steinmetz}@KOM.tu-darmstadt.de

Abstract

Replicating data and services at multiple networked computers increases the service availability, fault tolerance and quality of service (QoS) of distributed multimedia systems. In this paper, we discuss some relevant design and implementation issues of a replication mechanism for the distributed multimedia system medianode [1], a software infrastructure to share multimedia-enhanced teaching materials among lecture groups. To identify new replication requirements, we first study the characteristics of the presentational media types which are handled in medianode, then extract new replica units and granularities which have not been considered and not supported in existing replication mechanisms. Based on the new requirements and the results of a feature survey, we implemented a replication mechanism for medianode. The next working step is to evaluate the efficiency of our replica maintenance mechanism.

1. Introduction

For practical use of a distributed multimedia system such as medianode [1] in a multimedia-enhanced teaching environment, in which fast and consistent access to the teaching material should be provided for all accepted users of the system, the availability of the material must be increased by bypassing a variety of potential error sources. Replication of presentation materials and meta-data is a fundamental technique for providing high availability, fault tolerance and quality of service (QoS) in distributed multimedia systems [2], and in particular in medianode. For example, when a user requires access (read/write) to a presentation which comprises audio/video data and some resources which are not available in the local medianode at this point of time, a local replication manager copies the required data from their original location and puts them into either one of the medianodes located nearby or the local medianode, without requiring any user interaction (user-transparent). This function enhances the total performance of medianode by reducing the response delay that is often caused by insufficient system resources at a given service time. Furthermore, because of the replica available in the local medianode, the assurance that users can continue their presentation in a situation of network disconnection is significantly higher than without replicas.

In this paper, we discuss some relevant design and implementation issues of a replication mechanism for the distributed multimedia system medianode [1], which is currently developed as an infrastructure to share multimedia-enhanced teaching materials among lecture groups. With the replication mechanism, medianode provides enhanced access to presentation materials in both connected and disconnected operation modes.

The structure of the paper is as follows. In Section 2, we identify new replication requirements. After analyzing the characteristics of presentational media types, we classify three different types of target replicas according to their granularity (data size), their requirement of QoS support, and their update frequency. Section 3 presents the design and implementation issues for our replication model. We describe the proposed replication maintenance mechanism, e.g. how and when replicas are created and how the updates are signalled and transported. In Section 4, we give an overview of related work. The merits and limitations of existing replication mechanisms are discussed and a comparison of our approach with previous work is given. We conclude the paper with a summary of our work and an outlook towards possible future extensions of our replication mechanism in Section 5.

2. Identifying New Replication Requirements

2.1. Different Types of Presentation Data

In medianode, data organization comprises the storage of content data as well as meta information about this content data in a structured way. For the purpose of QoS-based generation of presentation files and replication, system resource usage information, such as the used memory and the number of loaded medianode components, is collected and managed.

Table 1: Data categories and their characteristics in medianode

  target data               availability  consistency   persistency  update     data size     QoS playback  global
                            requirement   requirement                frequency                              interest
  presentation description  high          middle(high)  yes          low        small/middle  not required  yes
  organizational data       high          high          yes          low        small         not required  yes
  file/data description     high          middle        yes          middle     small         not required  yes
  multimedia resources      high          middle        yes          middle     large         required      yes
  system resources          middle(low)   middle        no           high       small         not required  not strong
  user session/token        high          high          no           high       small         not required  no

The typical data types which can be identified in medianode are the following:

• Presentation contents: this type of data comprises text, image, and audio/video files, and can be stored in file systems which should handle automatic data distribution and access, and also support the multimedia characteristics of this content type.

• Presentation description data, e.g. XML files.

• Meta-data of user, system, domain, and organization information. User's title, group, system platform, and university are examples for this meta-data category.

• Meta-data of system resource usage information, such as memory usage, the number of threads running within the medianode process, and the number of loaded bows.

• Meta-data of user session and token information.


Table 1 shows an overview of these data types with their characteristics.

2.2. Classification of Target Replicas

The main goal of our replication system is to increase the availability of medianode's services and to decrease the response time for accesses to data located on other medianodes. To meet this goal, data which is characterized by a high availability requirement, as shown in Table 1, should be replicated among the running medianodes. We classify the target replicas according to their granularity (data size), their requirement of QoS support, their update frequency, and whether their data type is 'persistent' or not ('volatile'). There are three classes of replicas in medianode:

• Meta replicas (replicated metadata objects), which are persistent and of small size. An example would be a list of medianodes (sites) which currently contain an up-to-date copy of a certain file. This list itself is replicated to increase its availability and improve performance. A meta replica is a replica of this list.

• Soft replicas, which are non-persistent and of small size. This kind of replica can be used to reduce the number of messages exchanged between the local and remote medianodes, and thereby to reduce the total service response time. For example, if a local medianode knows about the available local system resources, the local replication manager can copy the desired data into the local storage bow, and a user-requested service which requires exactly these data can be processed with a shorter response time. Information about the available system resources, user sessions and the validity of user tokens are replicas of this type.

• True replicas, which are persistent and of large size. Content files of any media type, which may also be parts of presentation files, are True replicas. True replicas are the only replica type to which the end users have access for direct manipulation (updating). On the other side, they are also the only replica type which requires the support of really high availability and QoS provision.

All replicas which are created and maintained by our replication system are identical copies of the original media. Replicas with errors (non-identical copies) are not allowed to be created. Furthermore, we do not support any replication service for function calls and elementary data types.

2.3. Concept of Logically Centralized Database

For a technical realization of our proposed replication system in medianode, we use the concept of a so-called "logically centralized database" (LCDB), which especially enables transparent access to presentation materials. Similar to the concept of location-independent identifiers in distributed database systems [3], the LCDB enables a mapping between logical and physical resources. So users do not need to know where presentation resources are located physically and how they are accessed. Requests from users, either for reading or writing any presentation materials, are first sent to the Access Bow of the local medianode that runs on the user's local machine. After a successful check of the accessibility for the user and the availability of the requested resources, the corresponding storage bows send the target data to the users. Figure 1 illustrates the interface point, the bows building the LCDB and the interactions between the bows.

Figure 1: medianode architecture with replication service

Some additional remarks on the LCDB are in order:

• According to the data types, all of the presentation contents and their meta-data are stored in corresponding storage bows.

• The 'front-end' of the storage bow API provides unique interface functions, independent of the data types: this is similar to the VFS (virtual file system) interface in UNIX systems.

• Replication has to be supported for most storage bows, although the number of replicas and the update frequency may differ between the individual bows.

• For the update propagation between replication managers, a multicast RPC (remote procedure call) communication mechanism is used.

Figure 2: The medianode bow (MNBow) class hierarchy
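The classification of Table 1's data categories into the three replica classes follows directly from two of the listed characteristics. As a minimal illustration (class names from the paper; all function and field names are our own, not medianode's API):

```python
from dataclasses import dataclass

@dataclass
class DataCategory:
    """Characteristics of a medianode data type (cf. Table 1)."""
    name: str
    persistent: bool   # survives medianode restarts ('persistent' vs 'volatile')
    large: bool        # large data size (e.g. multimedia content files)

def replica_class(cat: DataCategory) -> str:
    """Map a data category to one of the three replica classes of Section 2.2."""
    if not cat.persistent:
        return "Soft replica"   # non-persistent, small: sessions, tokens, resources
    if cat.large:
        return "True replica"   # persistent, large: multimedia content files
    return "Meta replica"       # persistent, small: metadata, replica lists

categories = [
    DataCategory("multimedia resources", persistent=True, large=True),
    DataCategory("user session/token", persistent=False, large=False),
    DataCategory("file/data description", persistent=True, large=False),
]
for c in categories:
    print(f"{c.name}: {replica_class(c)}")
```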

3. Design and Implementation Issues

3.1. Scope of our Replication System

In medianode, we mainly focus on the replication service for accessing data 'inter-medianode', i.e. between medianodes, by providing replica maintenance in each medianode. Consequently, a replication manager can be implemented as one or a set of medianode bow instances in each medianode. The replication managers communicate among each other to exchange update information across all medianodes. A replication service within a medianode, i.e. 'intra-medianode', is not considered for the first stage of our implementation. However, the replication concept in this paper is straightforwardly applicable to the replication service for the intra-medianode scope.

3.2. The Replication Mechanism

Basically, our replication system does not assume a client-server replication model, because there are no fixed clients and servers in the medianode architecture; every medianode may be client or server depending on its current operations. A peer-to-peer model with the following features is used for our replication system:

(a) Every replica manager keeps track of a local file table including replica information.

(b) Information on whether and how many replicas are created is contained in every file table. I.e., each local replica manager keeps track of which remote replica managers (medianodes) are caching which replicas.

(c) Any access to the local replica for reading is allowed, and it is guaranteed that the local cached replica is valid until notified otherwise.

(d) If any update happens, the corresponding replica manager sends a multicast-based update signal to the replica managers which hold a replica of the updated data and are therefore members of the multicast group.

(e) To prevent excessive usage of multicast addresses, the multicast IP addresses through which the replica managers communicate can be organized in small replica sub-groups. Examples for such sub-groups are file directories or a set of presentations about the same lecture topic.

3.3. Implementation Architecture

To show a 'proof of concept', we have implemented a prototype of the proposed replication system model for the Linux platform (SuSE 7.0, Red Hat 6.2). Implemented are the replica manager (ReplVerifierBow), the update transport manager (ReplTransportBow), replica service APIs which are Unix-like file operation functions such as open, create, read, write, close (ReplFileSysBow), and a Volatile storage bow which maintains user session and token information. Figure 2 shows the class hierarchy of medianode's basic bows and of the extended bows for the replication system. MNBow is the root class, and the three bow APIs, MNAccessBow, MNVerifierBow and MNStorageBow, are implemented as MNBow's child classes. [14] gives a detailed description of the implemented bows.

The interaction model for medianode's bows is based on a 'request-response' communication mechanism. A bow which needs to access data or services creates a request packet and sends it to the core. According to the request type, the core either processes the request packet directly, or forwards it to a respective bow. The processing results are sent to the origin bow in a response packet. The request and response packets contain all necessary information for the communication between bows as well as for processing the requests. Based on this request-response mechanism, we experimented with some presentation scenarios with and without a replication service.
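The request-response dispatch through the core can be sketched as follows. This is a simplification under our own assumptions: the `Packet` fields, the handler registry, and all names are illustrative, not medianode's actual packet layout or core API:

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    """A request or response packet exchanged between bows via the core
    (field names are illustrative, not medianode's wire format)."""
    origin: str                 # bow that issued the request
    request_type: str
    payload: dict = field(default_factory=dict)

class Core:
    """Minimal sketch of the core's dispatch step: process directly,
    or forward to the bow registered for the request type."""
    def __init__(self):
        self.bows = {}          # request_type -> handler bow

    def register(self, request_type, bow):
        self.bows[request_type] = bow

    def process(self, req: Packet) -> Packet:
        handler = self.bows.get(req.request_type)
        if handler is None:
            # no bow registered: the core processes the request directly
            return Packet(origin="core", request_type="response",
                          payload={"status": "handled-by-core"})
        result = handler(req)
        # the processing result goes back to the origin bow in a response packet
        return Packet(origin=req.origin, request_type="response", payload=result)

core = Core()
core.register("read", lambda req: {"data": f"contents of {req.payload['file']}"})
resp = core.process(Packet(origin="ReplFileSysBow", request_type="read",
                           payload={"file": "lecture1.xml"}))
print(resp.payload["data"])
```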

3.4. Initialization of Replication Service

In this subsection, we describe medianode's operation flow with the replication service. Basically, the replication service in medianode begins by creating a media list and the replica tables of the three replica types in each medianode. As shown in Figure 3, ReplFileSysBow sends a request packet via the core to ReplVerifierBow for creating a media list for the media data which are located in the local medianode's file system (steps 1~2). Upon receiving the request packet, ReplVerifierBow creates the media list, which will be used to check the local availability of any required media data (step 3). ReplVerifierBow then builds the local replica tables for the two replica types, True replicas and Meta replicas, if the replica information already exists. A medianode configuration file can specify the default location where replica information is stored. Every type of replica table contains a list of replicas with information about organization, replica volume identifier, unique file name, file state, version number, number of replicas, a list of replicas, a multicast IP address, and some additional file attributes, such as access rights, creation/modification time, size, owner, and file type. The third replica table, for the Soft replicas to which the local system resources, user sessions and token information belong, may need to be created in terms of memory allocation, and the contents of this table can be partly filled when users request certain services. Once the replica tables are created, they are stored in the local file system and are accessible persistently.
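The fields listed above suggest a simple structure for one replica table entry. A minimal sketch, assuming plain Python types for each field (the exact types and defaults are our own choice):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReplicaTableEntry:
    """One entry of a medianode replica table, holding the fields listed
    in the text above (types and defaults are our assumption)."""
    organization: str
    volume_id: str              # replica volume identifier
    filename: str               # unique file name
    state: str                  # file state, e.g. "valid" or "stale"
    version: int
    replicas: List[str] = field(default_factory=list)  # medianodes holding a copy
    multicast_ip: str = ""      # group address used for update signalling
    attributes: dict = field(default_factory=dict)     # access rights, times, size, owner, type

    @property
    def replica_count(self) -> int:
        return len(self.replicas)

entry = ReplicaTableEntry("KOM", "vol-01", "lecture1.mpg", "valid", 3,
                          replicas=["medianode1", "medianode3"],
                          multicast_ip="239.1.2.3")
print(entry.replica_count)
```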

3.5. Maintaining Replica Tables

In medianode, these three replica tables are maintained locally by the local replication manager. So there is no need to exchange any update-related messages for files of which no replica has been created. This approach improves the utilization of system resources, especially network resources, by decreasing the number of messages exchanged between the replication managers of the distributed medianodes. But when any medianode wants to get a replica from the local replica tables, the desired replica elements are copied to the target medianode, and the replication manager at the target medianode keeps these replica elements separate, in another replica table which is used only for the management of remote replicas, i.e. for the management of replicas whose original files are stored in a remote medianode.
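The split bookkeeping described above can be sketched as two separate tables per replication manager. This is our own simplification (method and table names are invented for illustration, not medianode's API):

```python
class ReplicaManager:
    """Sketch of the local/remote table split: replicas of locally stored
    files and replicas fetched from remote medianodes are kept apart."""
    def __init__(self):
        self.local_table = {}    # filename -> medianodes caching our file
        self.remote_table = {}   # filename -> origin medianode of the replica

    def grant_replica(self, filename, requester):
        # a remote medianode fetched a copy of a local file; remembering who
        # caches it means files without replicas need no update messages at all
        self.local_table.setdefault(filename, []).append(requester)

    def store_remote_replica(self, filename, origin):
        # a replica whose original file lives on another medianode goes
        # into the separate remote-replica table
        self.remote_table[filename] = origin

mgr = ReplicaManager()
mgr.grant_replica("slides.xml", "medianode2")
mgr.store_remote_replica("intro.mpg", "medianode3")
print(mgr.local_table, mgr.remote_table)
```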

3.6. Acquiring a Replica from Remote Replication Managers

Upon receiving a service request (data access request) from users, the local medianode attempts to access the required data in a local storage bow (ReplFileSysBow) (steps 4~5). In the case when the data is not available locally, the local ReplFileSysBow sends a request packet to ReplVerifierBow to get a replica for the data (step 6). The ReplVerifierBow then starts a process to acquire a replica by creating a corresponding request packet which is passed to ReplTransportBow (steps 7~8). The ReplTransportBow multicasts a data search request to all the peer replication managers and waits for replication managers to respond (step 9). The list of medianodes to which the multicast message is sent can be read from the medianode's configuration file. Whether the ReplTransportBow waits for all responses or takes the first one depends on the optimization policy, which is given as a configuration flag. After receiving the target replica, the ReplTransportBow sends a response packet to the ReplVerifierBow, which then updates the corresponding replica tables, i.e., ReplVerifierBow adds the new replica element to the True replicas table and its metadata to the Meta replicas table, respectively (steps 10~13). Finally, the local ReplFileSysBow, which originally issued the replica creation request, creates a response packet including the replica handle and then sends it to the MNAccessBow (steps 14~15).
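The acquisition step, including the wait-for-all versus first-response policy flag, can be sketched as follows. The loop over peers stands in for one multicast send; `query`, the peer names and the policy argument are our own illustrative stand-ins for the multicast RPC call:

```python
def acquire_replica(filename, peers, query, wait_for_all=False):
    """Sketch of replica acquisition: send a data search request to all peer
    replication managers and either take the first answer or collect all
    of them (the policy flag would come from the configuration file)."""
    responses = []
    for peer in peers:                 # stand-in for one multicast send
        data = query(peer, filename)
        if data is None:
            continue                   # this peer holds no copy
        responses.append((peer, data))
        if not wait_for_all:
            break                      # optimization policy: first response wins
    return responses

# hypothetical peer set: only medianode3 and medianode4 hold the file
peers = ["medianode2", "medianode3", "medianode4"]
have = {"medianode3": b"mpeg-bytes", "medianode4": b"mpeg-bytes"}
query = lambda peer, f: have.get(peer)
print(acquire_replica("intro.mpg", peers, query))                  # first response only
print(acquire_replica("intro.mpg", peers, query, wait_for_all=True))
```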

3.7. Update Distribution & Transport Mechanism

The update distribution mechanism in medianode differs between the three replica types and their managers. This is due to the fact that the three replica types have different levels of requirements on, and characteristics of, high availability, update frequency and consistency. Experience from [4] and [5] also shows that differentiating update distribution strategies makes sense for web and other distributed documents.

Figure 3: Service flow showing the internal bow interaction mechanism in medianode, with replication support

The medianode replication system offers a unique interface to the individual update signalling and transport protocols, which are selectively and dynamically loaded and unloaded from the replica transport manager, implemented as an instance of medianode's access bow. The update transport and signalling protocols used are:

• The RPC protocol [2] as a simple update distribution protocol. This mechanism is mainly used in the first step of our simple and fast implementation.

• A multicast-based RPC communication mechanism. In this case, the updates are propagated via multicast to the other replica managers which are members of the multicast group. RPC2 [6, 9] is used for the first implementation. RPC2 offers the transmission of large files, such as updated AV content files or diff files, by using the Side Effect Descriptor. However, RPC2 with the Side Effect Descriptor does not guarantee any reliable transport of updates.
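The unique interface over interchangeable transports, and the message-count advantage of multicast over per-peer RPC, can be sketched as follows (class and method names are our own; the returned strings merely label what would be sent):

```python
class UpdateTransport:
    """Common interface for pluggable update transport protocols
    (a simplification of the unique interface described above)."""
    def propagate(self, update, group):
        raise NotImplementedError

class UnicastRPC(UpdateTransport):
    def propagate(self, update, group):
        # one RPC per group member: simple, but O(n) messages
        return [f"rpc({peer}, {update})" for peer in group]

class MulticastRPC(UpdateTransport):
    def propagate(self, update, group):
        # one multicast send reaches all members of the replica group
        return [f"mcast({sorted(group)}, {update})"]

group = {"medianode1", "medianode2", "medianode3"}
print(len(UnicastRPC().propagate("v2", group)))    # one message per peer
print(len(MulticastRPC().propagate("v2", group)))  # a single multicast send
```

Because both transports share the `propagate` interface, the replica transport manager can swap them without touching the callers, which mirrors the dynamic loading and unloading described above.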

3.8. Approaches for Resolving Update Conflicts

The possible conflicts that could appear during the shared use of presentational data and files are either (a) an update conflict, when two or more replicas of an existing file are concurrently updated, (b) a naming conflict, when two (or more) different files are concurrently given the same name, or (c) an update/delete conflict, which occurs when one replica of a file is updated while another is deleted. In most existing replication systems, the conflict resolving problem for update conflicts was treated as a minor problem. It was argued that most files do not get any conflicting updates, for the reason that only one person tends to update them [8]. Depending on the used replication model and policy, there are different approaches to resolving update conflicts, of which our replication system will use the following strategies [2, 6, 7, 11]:

• Swapping: exchange the local peer's update with the other peers' updates;

• Dominating: ignore the updates of other peers and keep the local tentative update as the final update;

• Merging: integrate two or more updates and build one new update table.
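The three strategies can be sketched on update tables modelled as plain dicts. This is our own simplification; in particular, the tie-break in the merging branch (local entries win on overlap) is an assumption, since the paper does not specify one:

```python
def resolve(local, remote, strategy):
    """Sketch of the three update-conflict strategies listed above,
    applied to update tables modelled as dicts (our simplification)."""
    if strategy == "swapping":
        return dict(remote)        # adopt the other peer's updates
    if strategy == "dominating":
        return dict(local)         # keep the local tentative update as final
    if strategy == "merging":
        merged = dict(remote)
        merged.update(local)       # assumed tie-break: local entries win
        return merged
    raise ValueError(f"unknown strategy: {strategy}")

local = {"slide3": "v2"}
remote = {"slide3": "v1b", "slide7": "v4"}
print(resolve(local, remote, "merging"))
```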

4. Related Work

There are many works on and approaches to replication. The approaches for distributed file systems differ from those for Internet-based distributed web servers and those for transaction-based distributed database systems. Well-known replication systems among distributed file systems are Coda [6], Roam [11], Rumor [13] and Ficus [17], which keep the file service semantics of Unix. Therefore, they make it possible to develop applications based on them. They are based either on a client-server model or a peer-to-peer model. Often they use optimistic replication, which can hide the effects of network latency. Their replication granularity is mostly the file system volume, with a large size and a low number of replicas. There is some work on optimization for these systems concerning the update protocol and the replica unit. To keep the delay small and therefore maintain real-time interaction, it was desirable to use an unreliable transport protocol such as UDP. In the earlier phases, many approaches used unicast-based data exchange, by which the replication managers communicated with each other one-to-one. This caused large delays and prevented real-time interaction. To overcome this problem, multicast-based communication has been used recently [6, 8, 15, 16]. For Coda, the RPC2 protocol is used for multicast-based update exchange, which, with the Side Effect Descriptor, provides transmission of large files.

To limit the amount of storage used by a particular replica, Rumor and Roam developed the selective replication scheme [12]. A particular user, who only needs a few of the files in a volume, can control with selective replication which files to store in his local replica. A disadvantage of selective replication is the 'full back storing' mechanism: if a particular replica stores a particular file in a volume, all directories in the path of that file in the replicated volume must also be stored.

JetFile [8] is a prototype distributed file system which uses multicast communication and optimistic strategies for synchronization and distribution. The main merit of JetFile is its multicast-based callback mechanism, by which the components of JetFile, such as the file manager and the versioning manager, interact to exchange update information. Using the multicast-based callback, JetFile distributes the centralized update information, which is normally kept by the server, over a number of multicast routers. However, the multicast callbacks in JetFile are not guaranteed to actually reach all replication peers, and the centralized versioning server, which is responsible for the serialization of all updates, can lead to an overloaded system state. Furthermore, none of the existing replication systems supports quality of service (QoS) characteristics of the (file) data which they handle and replicate.

5. Summary and Future Work

Replication of presentation materials and meta-data is a fundamental technique for providing high availability, fault tolerance and quality of service (QoS) in distributed multimedia systems. In this paper, we discussed some relevant design and implementation issues of a replication mechanism for the distributed multimedia system medianode. After analyzing the characteristics of presentational media types, we classified three different types of target replicas according to their granularity, requirement of QoS support, and update frequency. We also described the proposed replication maintenance mechanism, e.g. how and when replicas are created and how the updates are signalled and transported. We are currently in the process of implementing the conflict resolving mechanism and the versioning and storage/transport load levelling mechanisms, which are integrated with the replication manager. With the forthcoming implementation we will be able to build medianode as a highly available, scalable and cooperative distributed media server for multimedia-enhanced teaching. The next working steps are to evaluate the efficiency of our replica maintenance mechanism and to design other replication service components. We are intensively investigating the following issues for extensions of our replication system:

• Reliable multicast-based update distribution mechanism: in the multicast-based replication environment, the replicas and their updates should be propagated 100% correctly to avoid any inconsistency between replicas. Although RPC2 offers multicast-based transmission, it does not guarantee any reliable transport of updates. LC-RTP (Loss Collection RTP) [10] is a reliable multicast protocol which was originally developed as an extension of the RTP protocol to support reliable video streaming within the medianode project. We adopt LC-RTP and check the usability of the protocol, depending on the degree of reliability required for the individual groups of replicas.

• QoS-aware replication for distributed multimedia systems: the decision problems of (a) whether a replica should be created from an original file and, if so, which files should be replicated (replica selection problem), and (b) on which system replicas should be placed (replica placement problem), are solved by checking the current usage of the available system resources. [18] gives a survey of the work related to these problems and their performance models.

References

[1] The medianode project. (http://www.httc.de/medianode).

[2] G. Coulouris, J. Dollimore and T. Kindberg. Distributed Systems, 3rd Ed., Addison-Wesley, 2001.

[3] A. Eickler, A. Kemper and D. Kossman. Finding Data in the Neighborhood. In Proc. of the 23rd VLDB Conference, Athens, Greece, 1997.

[4] P. Triantafillou and D. J. Taylor. Multiclass Replicated Data Management: Exploiting Replication to Improve Efficiency. In IEEE Trans. on Parallel and Distributed Systems, pages 121-138, Vol. 5, No. 2, Feb. 1994.

[5] G. Pierre, I. Kuz, M. van Steen and A. S. Tanenbaum. Differentiated Strategies for Replicating Web Documents. In Proc. of the 5th International Workshop on Web Caching and Content Delivery, Lisbon, May 2000.

[6] M. Satyanarayanan, J. J. Kistler, P. Kumar, M. E. Okasaki, E. H. Siegel, and D. C. Steere. Coda: A Highly Available File System for a Distributed Workstation Environment. In IEEE Transactions on Computers, 39(4), April 1990.

[7] J. Yin, L. Alvisi, M. Dahlin and C. Lin. Volume Leases for Consistency in Large-Scale Systems. In IEEE Transactions on Knowledge and Data Engineering, 11(4), July 1999.

[8] B. Groenvall, A. Westerlund and S. Pink. The Design of a Multicast-based Distributed File System. In Proceedings of the Third Symposium on Operating Systems Design and Implementation (OSDI '99), New Orleans, Louisiana, pages 251-264, February 1999.

[9] M. Satyanarayanan and E. H. Siegel. Parallel Communication in a Large Distributed Environment. In IEEE Trans. on Computers, pages 328-348, Vol. 39, No. 3, March 1990.

[10] M. Zink, A. Jones, C. Griwodz and R. Steinmetz. LC-RTP (Loss Collection RTP): Reliability for Video Caching in the Internet. In Proceedings of ICPADS '00: Workshop, pages 281-286. IEEE, July 2000.

[11] D. Ratner, P. Reiher, and G. Popek. Roam: A Scalable Replication System for Mobile Computing. In Workshop on Mobile Databases and Distributed Systems (MDDS), September 1999. (http://lever.cs.ucla.edu/project-members/reiher/available_papers.html)

[12] D. H. Ratner. Selective Replication: Fine-grain Control of Replicated Files. Master's thesis, UCLA, USA, 1995.

[13] R. Guy, P. Reiher, D. Ratner, M. Gunter, W. Ma, and G. Popek. Rumor: Mobile Data Access Through Optimistic Peer-to-Peer Replication. In Workshop on Mobile Data Access, November 1998. (http://lever.cs.ucla.edu/project-members/reiher/available_papers.html)

[14] G. On and M. Liepert. Replication in medianode. Technical Report TR-2000-03, Darmstadt University of Technology, Germany, September 2000.

[15] C. Griwodz. Wide-Area True Video-on-Demand by a Decentralized Cache-based Distribution Infrastructure. PhD dissertation, Darmstadt University of Technology, Germany, April 2000.

[16] M. Mauve and V. Hilt. An Application Developer's Perspective on Reliable Multicast for Distributed Interactive Media. In Computer Communication Review, pages 28-38, 30(3), July 2000.

[17] T. W. Page, Jr., R. G. Guy, G. J. Popek, and J. S. Heidemann. Architecture of the Ficus Scalable Replicated File System. Technical Report CSD-910005, UCLA, USA, March 1991.

[18] M. Nicola and M. Jarke. Performance Modeling of Distributed and Replicated Databases. In IEEE Transactions on Knowledge and Data Engineering, 12(4), pages 645-672, July/Aug. 2000.
