FranklinCacheConsistency


Paper

  • related work: shared memory across computers.  isn't applicable here: no sharing across programs, critical points must be known beforehand, no fault tolerance
  • all caches must handle:
    • write-invalidate and write-broadcast
    • broadcast changes or maintain a directory
  • file systems support much less stringent notions of correctness
  • shared-disk DBMS nodes have less user locality, communication costs are much lower, use P2P instead of client-server
  • while "page server" DBMS have less conflict, they have to worry about sequential sharing
  • taxonomy levels:
    • detection (all access to data must be confirmed)
      • validity check initiation
      • change notification hints
      • remote update action
    • avoidance (clients never have opportunity to access stale data)
      • write intention declaration
      • write permission duration
      • remote conflict priority
      • remote update action
  • write permission fault occurs when a client tries to write to a page for which it doesn't have permission
    • can ask synchronously
    • asynchronously
    • or defer it to end of T
  • comparing invalid access prevention:
    • CB-A: callback-all avoids invalid access by calling back to clients to see if it can lock pages; otherwise clients retain their locks
    • C2PL: caching two-phase locking detects problems by having clients send version information; the server tells them if it's OK
    • similarities: inter-T caching, no propagation of updated pages, consistency done synchronously
    • CB-R only keeps read permissions; writing is similar to C2PL
  • write intention declaration:
    • CB-R: callback-read requires client communication with server to write to a page
    • O2PL-I: optimistic 2PL: the client writes to the page, then sends it as an update (at commit, the server invalidates other clients' pages)
    • similarities: retain write permissions till end of transaction, both use invalidation for commit action
  • write permission duration:
    • CB-R: write permission needed from server
    • CB-A: write permission kept until requested by server
    • similarities: everything else
  • remote update action:
    • O2PL-I: server invalidates pages on clients when it commits
    • O2PL-P: server propagates pages on commit
    • similarities: everything else
  • testing done using DeNet
    • clients have
      • transaction source (sends or resends transaction requests)
      • client manager (coordinates execution of Ts)
      • buffer manager (LRU pages)
      • resource manager (CPU)
    • servers have
      • concurrency control manager
      • server manager (coords transactions)
      • buffer manager
      • resource manager (disk, CPU)
  • client workloads:
    • private: hot region on each client, shared cold region used in a read-only manner (CAD environment where one person works on a section but references common libraries)
    • hotcold: high degree of locality per client, moderate amount of RW sharing
    • uniform: low locality; caching shouldn't help much; higher level of contention/sharing than hotcold
    • feed: some clients write, some read (stock quote environment)
  • large cache clients, slower network:
    • private: B2PL sucks, C2PL to a lesser extent.  both send a lot of messages.  trying to keep stuff on the client is best because no one else will access it
    • hotcold: you can see a spike where server performance starts to matter.  similar to private, except O2PL-P, which ends up sending a lot of wasted page-update messages
    • uniform: all of the more complex algorithms end up sending more messages than the constant B2PL/C2PL.  data contention between clients causes an increased number of aborts in the O2PL algorithms
    • feed: only propagation differences matter: O2PL-P readers perform very well because updates are propagated to them before they read
  • CB-A was sensitive to the amount of sharing: increased clients caused increase number of messages to be sent
  • if detection is used, it should be done optimistically; hints can be used to reduce the cost of late detections
  • deferred write intentions are usually better, unless there is a lot of contention
  • retaining write permissions is best if a page is more likely to be updated at the client holding the lock than read at another client (CB-A works best in private)
  • propagation is dangerous, very sensitive to cache size, etc.; probably best to use dynamic propagation
  • dynamic propagation: switch to invalidation for a page if the last propagation went unused, or (newdynamic) use invalidation until a need for propagation is detected (see the sketch below)
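
A minimal sketch of the dynamic-propagation idea above, assuming a hypothetical per-copy bookkeeping object kept at the committing server (the names RemoteCopy, on_commit_update, etc. are illustrative, not from the paper):

```python
class RemoteCopy:
    """Per-client, per-page state used to pick invalidate vs. propagate."""

    def __init__(self, start_with_propagation=True):
        # start_with_propagation=False corresponds to the "newdynamic" variant
        self.propagate = start_with_propagation
        self.used_since_last_push = True   # was the last propagated copy read?

    def on_remote_read(self):
        # The remote client read the page: propagation paid off.
        self.used_since_last_push = True

    def on_refetch_after_invalidate(self):
        # The client had to re-fetch a page we invalidated: propagation would
        # have helped, so switch back to propagation (the "newdynamic" trigger).
        self.propagate = True

    def on_commit_update(self):
        """Return the action to take on this remote copy when an update commits."""
        if self.propagate and not self.used_since_last_push:
            # The previous propagation went unused before the next write: demote.
            self.propagate = False
        action = "propagate" if self.propagate else "invalidate"
        self.used_since_last_push = False
        return action


copy = RemoteCopy(start_with_propagation=True)
print(copy.on_commit_update())   # "propagate": first update pushes the page
print(copy.on_commit_update())   # "invalidate": the pushed copy was never read
```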

Lecture

  • popular for OO-DBMS: AutoCAD, cooperative development, distributed object caching
  • also called a data-shipping system
  • object "faulting" approaches:
    • wrapper class approach: if it needs data, it fetches from the server
    • memory-mapped data: on a page fault, fetch the data and load it into the app's address space
    • bytecode manipulation (Java): used to track object access and fetch data
    • whichever approach, changes (dirty pages) need to be tracked
  • similar to replication handling, but:
    • dynamic replication
    • second class ownership/replicas (server is always in charge)
  • basic 2PL:
    • primary copy locking: always lock on server, scope of T
    • all 1st time lock req's go to server
    • combines read-lock and get-page
    • server can detect deadlocks
    • invalidate cache at end of T
    • baseline for other protocols
  • caching 2PL:
    • refines B2PL with cross-T caching
    • locking still at the server
    • 1st read: client sends its version ID; server sends the data back iff the version is out of date (see the version-check sketch after this list)
    • server keeps a version ID table for all cached data for speed
    • clients can piggyback "I dropped this page" onto other messages to keep the server's table up to date
  • callback read:
    • aimed at per workstation locality
    • ensure that local cached data is valid at all times
    • data cached across T
    • clients cache read locks for all cached pages (cache hit -> no need to ask the server for a read lock)
    • miss: ask the server for the page (may have to wait for write locks to be released)
    • client write -> go to server for write lock
    • server must run callbacks before granting write locks (see the callback sketch after this list)
    • at end of T, clients send updates, unlocks
  • callback all:
    • R and W locks are cached on pages you have unless you're told otherwise
    • read and cache hits: see CB-R
    • read cache miss: server may have to call back write access first (take back write permission from pages cached at other clients)
  • optimistic 2PL:
    • ROWA replication with commit time handling of writes to replicas
    • each client has a lock manager
    • server keeps track of copies
    • reads get local locks on clients, and only short locks on the server
    • writes are local until commit (see the commit sketch after this list)
      • client sends commit with changes
      • server gets update-copy locks on changed pages
      • server gets the same locks on the clients using 2PC
      • using different lock types (update, write) can help prevent deadlock
    • O2PL-I: once we get an update lock, we invalidate the page (1 phase)
    • O2PL-P: propagate the page (2 phases)
    • O2PL-D / O2PL-ND (dynamic / newdynamic): choose between the two based on the workload
  • read paper for performance differences
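
A minimal sketch of the caching-2PL first-read check described above, assuming a hypothetical C2PLServer structure (the real server also acquires the read lock and handles concurrent writers, which is omitted here):

```python
class C2PLServer:
    def __init__(self):
        self.pages = {}        # page_id -> (version, data)
        self.cached_at = {}    # page_id -> {client_id: version it holds}

    def first_read(self, client_id, page_id, client_version):
        """First read of a page in a transaction: the client sends the version
        ID of its cached copy; data is shipped only if that copy is stale."""
        version, data = self.pages[page_id]
        # ... read lock acquired at the server here (omitted) ...
        self.cached_at.setdefault(page_id, {})[client_id] = version
        if client_version == version:
            return ("valid", None)           # cached copy can be used as-is
        return ("data", (version, data))     # ship the current page

    def dropped_page(self, client_id, page_id):
        """Piggybacked "I dropped this page" notice keeps the table up to date."""
        self.cached_at.get(page_id, {}).pop(client_id, None)
```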
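
A sketch of the callback-read write path, again with hypothetical names: the server grants a write lock only after every other client caching the page has answered the callback and given up its copy (a client still reading the page would delay its reply, which is not modeled here):

```python
class CBRClient:
    def __init__(self, client_id):
        self.client_id = client_id
        self.cache = set()     # page_ids cached (each implies a read permission)

    def release_page(self, page_id):
        # Callback handler: drop the page and its read permission.
        self.cache.discard(page_id)


class CBRServer:
    def __init__(self, clients):
        self.clients = {c.client_id: c for c in clients}
        self.read_copies = {}  # page_id -> set of client_ids caching it

    def write_lock(self, requester_id, page_id):
        holders = self.read_copies.get(page_id, set()) - {requester_id}
        for client_id in holders:
            # Call back to each caching client before granting the write lock.
            self.clients[client_id].release_page(page_id)
            self.read_copies[page_id].discard(client_id)
        return "granted"       # requester may now update the page
```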
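
And a sketch of commit-time handling in O2PL, with the invalidate/propagate choice as the final step. The Site/Copy classes and o2pl_commit function are illustrative only; the real protocol runs a full 2PC exchange and handles deadlocks between update-copy locks:

```python
class Copy:
    """One cached copy of a page at one site (server or client)."""
    def __init__(self, data):
        self.data = data
        self.lock = None


class Site:
    def __init__(self, name):
        self.name = name
        self.copies = {}       # page_id -> Copy


def o2pl_commit(server, clients, updated, mode="invalidate"):
    """updated: page_id -> new data written locally by the committing client."""
    remote = {pid: [c for c in clients if pid in c.copies] for pid in updated}

    # Phase 1: update-copy locks at the server, then at each caching client (2PC prepare).
    for pid in updated:
        server.copies[pid].lock = "update"
        for c in remote[pid]:
            c.copies[pid].lock = "update"

    # Phase 2: install the new data at the server; invalidate (O2PL-I) or
    # propagate (O2PL-P) the remote copies, then release the locks.
    for pid, data in updated.items():
        server.copies[pid].data = data
        server.copies[pid].lock = None
        for c in remote[pid]:
            if mode == "invalidate":
                del c.copies[pid]
            else:
                c.copies[pid].data = data
                c.copies[pid].lock = None
```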