

CSE138 (Distributed Systems) Assignment 3

Spring 2020

This assignment was written by Reza NasiriGerdeh and Lindsey Kuper, based on Peter Alvaro’s course design. This document is the copyrighted intellectual property of the authors. Do not copy or distribute in any form without explicit permission.

Instructions

General

  • In this assignment, you will implement a replicated fault-tolerant key-value store that provides causal consistency. In your key-value store, every key-value pair is replicated to all nodes (replicas). If a node goes down, the key-value store is still available and can respond to clients’ requests because the other nodes have a copy of all key-value pairs. The causal consistency model captures the causal relationship between operations and ensures that all clients see the operations in causal order, regardless of which replica they contact to perform them. That is, if an event A happens before an event B (so B is potentially caused by A), then all clients must see A before they see B.

  • You will use Docker to create an image that exposes a key-value store implementing the REST API described in the next section.

  • Your key-value store does not need to persist the data; i.e., it can store data in memory only.

  • You need to implement your own key-value store rather than using an existing key-value store such as Redis, CouchDB, or MongoDB.

  • You may consider the order of name/value pairs in JSON objects to be insignificant. For example,

{"foo":"bar","baz":"quux"} is equivalent to {"baz":"quux","foo":"bar"}.

Testing

  • We have provided a test script (a Python script) that you can use to test your work. It is critical that you run the test script before submitting your assignment. (Note that the test script will take several minutes to run.) The tests provided are the same ones we will run on our side during grading, and we will also run additional tests consistent with the assignment specification. The provided tests are not even close to being exhaustive, and you should certainly do more testing on your own, but they should be enough to get you started.

 

Submission workflow

  • One of the members of your team should create a private GitHub repository named CSE138_Assignment3. Add all the members of the team as collaborators to the repository.

  • Invite the ucsc-cse138-staff GitHub account as a collaborator to the repository as well.

  • In addition to the project file(s) implementing the key-value store, the repository should contain at the top level:

    • the Dockerfile instructing how to create your Docker image

    • a .txt file listing the members of the team

    • a file contribution-notes.txt describing the contributions of each member of the team

    • a file mechanism-description.txt including the description of the mechanisms implemented for causal dependency tracking and detecting that a replica is down (see below for more information about this)

  • Commit early and often as you work on the assignment! We want to see your progress.

  • Submit your team name, your repository URL, and the commit ID that you would like to be used for grading to the following Google form: https://forms.gle/qWojyQXorudGYen8A

  • Only one of the team members needs to submit the form.

Fault-Tolerant Key-Value Store with Causal Consistency

Your fault-tolerant key-value store supports two kinds of operations: view operations and key-value operations. The main endpoints for view and key-value operations are /key-value-store-view and /key-value-store, respectively.

The term “view” refers to the current set of replicas that are up and running. To do view operations, a replica sends GET requests (for retrieving the view), PUT requests (for adding a new replica to the view), and DELETE requests (for deleting a replica from the view) to another replica.

To do key-value operations on a key <key>, a client sends GET requests (for retrieving the value of <key>), PUT requests (for adding <key> as a new key or updating the value of the existing key), and DELETE requests (for deleting <key>) to the /key-value-store/<key> endpoint at a replica. The store returns a response in JSON format as well as the appropriate HTTP status code.

Assume a scenario in which we have three replicas named replica1, replica2, and replica3, each running inside Docker containers connected to a subnet named mynet with IP address range 10.10.0.0/16. The IP addresses of the replicas are 10.10.0.2, 10.10.0.3, and 10.10.0.4, respectively. All instances are exposed at port number 8085. Each replica knows its own socket address (IP address and port number) and the view of the store (socket addresses of all replicas). You should give this external information to the replica when you start the corresponding container.
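For illustration, a replica implemented in Python might read this external information from environment variables at startup (a minimal sketch; the variable names match the -e flags in the docker run commands below):

import os

# This replica's own socket address, e.g. "10.10.0.2:8085"
SOCKET_ADDRESS = os.environ["SOCKET_ADDRESS"]

# The initial view: socket addresses of all replicas, comma-separated
VIEW = os.environ["VIEW"].split(",")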

Create subnet

 To create the subnet mynet with IP range 10.10.0.0/16, execute

$ docker network create --subnet=10.10.0.0/16 mynet

Build Docker image

 Execute the following command to build your Docker image:

$ docker build -t assignment3-img .

Run Docker containers

 To run the replicas, execute

$ docker run -p 8082:8085 --net=mynet --ip=10.10.0.2 --name="replica1" \
  -e SOCKET_ADDRESS="10.10.0.2:8085" -e VIEW="10.10.0.2:8085,10.10.0.3:8085,10.10.0.4:8085" \
  assignment3-img

$ docker run -p 8083:8085 --net=mynet --ip=10.10.0.3 --name="replica2" \
  -e SOCKET_ADDRESS="10.10.0.3:8085" -e VIEW="10.10.0.2:8085,10.10.0.3:8085,10.10.0.4:8085" \
  assignment3-img

$ docker run -p 8084:8085 --net=mynet --ip=10.10.0.4 --name="replica3" \
  -e SOCKET_ADDRESS="10.10.0.4:8085" -e VIEW="10.10.0.2:8085,10.10.0.3:8085,10.10.0.4:8085" \
  assignment3-img
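Because of the -p mappings above, each replica is also reachable from the host machine; for example, you can check that replica1 is up (host port 8082 forwards to container port 8085):

$ curl --request GET --header "Content-Type: application/json" http://localhost:8082/key-value-store-view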

View Operations

In practice, one or more replicas can go down. We need view operations to read or update the view of the replicas in that case. To do view operations, the requests need to be sent from a replica (not a client) to the /key-value-store-view endpoint at another replica.

Here are the view operations that your fault-tolerant key-value store should support:

Read a replica’s view of the store

$ curl --request GET --header "Content-Type: application/json" --write-out "\n%{http_code}\n" http://<replica-socket-address>/key-value-store-view

{"message":"View retrieved successfully","view":<view>} 200

Delete a replica from another replica’s view of the store

$ curl --request DELETE --header "Content-Type: application/json" --write-out "\n%{http_code}\n" --data '{"socket-address": <socket-address>}' http://<replica-socket-address>/key-value-store-view

{"message":"Replica deleted successfully from the view"} 200

$ curl --request DELETE --header "Content-Type: application/json" --write-out "\n%{http_code}\n" --data '{"socket-address": <socket-address>}' http://<replica-socket-address>/key-value-store-view

{"error":"Socket address does not exist in the view","message":"Error in DELETE"} 404

Add a replica to another replica’s view of the store

$ curl --request PUT --header "Content-Type: application/json" --write-out "\n%{http_code}\n" --data '{"socket-address": <socket-address>}' http://<replica-socket-address>/key-value-store-view

{"message":"Replica added successfully to the view"} 201

$ curl --request PUT --header "Content-Type: application/json" --write-out "\n%{http_code}\n" --data '{"socket-address": <socket-address>}' http://<replica-socket-address>/key-value-store-view

{"error":"Socket address already exists in the view","message":"Error in PUT"} 404

If a replica finds out that another replica is down, it deletes that replica from its view and broadcasts a DELETE view request so that the other replicas can do the same. If a new replica is added to the system, it first broadcasts a PUT view request so that the other replicas can add the new replica to their views. Afterwards, it retrieves all the key-value pairs from one of the existing replicas and puts them into its own local store.

Notice that your key-value store does not require causal consistency for view operations. Moreover, it is up to you to implement the mechanism to detect a replica that is down. You need to provide a description of your chosen mechanism in your mechanism-description.txt file.
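For example, one simple mechanism (an assumption, not a requirement) detects a down replica when a broadcast request to it fails or times out, then removes it locally and tells the others; a sketch using the third-party requests library:

import requests

def handle_down_replica(view, own_address, down_address):
    """Remove a replica that failed to respond, then broadcast the deletion."""
    view.remove(down_address)
    for replica in view:
        if replica == own_address:
            continue
        try:
            requests.delete("http://" + replica + "/key-value-store-view",
                            json={"socket-address": down_address}, timeout=3)
        except requests.exceptions.RequestException:
            pass  # that replica may itself be down; it will be detected in turn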

Key-value operations under the causal consistency model

You probably already have an intuition for what causal consistency is based on our discussions of happens- before, causal broadcast, and consistent global snapshots. This is the first time, however, that we are applying the idea to a storage system supporting operations such as reads and writes.

In class, we defined causal consistency as follows: Writes that are potentially causally related must be seen by all processes in the same (causal) order. (Writes that are not potentially causally related, that is, writes that are concurrent or independent, may be seen in different orders on different processes.)

What does it mean for writes to be “potentially causally related”? Consider the happens-before relation that we discussed in class:

The happens-before relation → is the smallest binary relation such that:

  1. If 𝐴 and 𝐵 are events on the same process and 𝐴 comes before 𝐵, then 𝐴 → 𝐵.

  2. If 𝐴 is a send and 𝐵 is a corresponding receive, then 𝐴 → 𝐵.

  3. If 𝐴 → 𝐵 and 𝐵 → 𝐶, then 𝐴 → 𝐶.

With some tweaks to wording, we can adapt the happens-before relation to our key-value store setting:

  1. If 𝐴 and 𝐵 are operations issued by the same client and 𝐴 happens first, then 𝐴 → 𝐵.

  2. If 𝐴 is a write operation (i.e., PUT or DELETE) and 𝐵 is a read operation (i.e., GET) that reads the value written by 𝐴, then 𝐴 → 𝐵.

  3. If 𝐴 → 𝐵 and 𝐵 → 𝐶, then 𝐴 → 𝐶.

For the purposes of this assignment, you can assume that it is not the case that multiple write operations will write the same value to the same location. In other words, if 𝐵 is a read operation that reads the value 𝑥 from a given location, then assume that there is a unique write operation 𝐴 that wrote 𝑥 to that location. Consider the following example execution:

Client1: PUT(x,1) → PUT(y,2) → PUT(x,3)

Client2:   GET(y)=2 → PUT(x,4)

Client3:   GET(x)=4 → PUT(z,5)

In this example, the following operations are related by the → relation:

PUT(x,1) → PUT(y,2) (Case 1)
PUT(y,2) → PUT(x,3) (Case 1)
GET(y)=2 → PUT(x,4) (Case 1)
GET(x)=4 → PUT(z,5) (Case 1)

PUT(y,2) → GET(y)=2 (Case 2)
PUT(x,4) → GET(x)=4 (Case 2)

PUT(x,1) → PUT(x,3) (Case 3)
PUT(x,1) → GET(y)=2 (Case 3)
PUT(y,2) → PUT(x,4) (Case 3)
PUT(x,1) → PUT(x,4) (Case 3)
PUT(x,4) → PUT(z,5) (Case 3)

and more, according to transitivity (Case 3)

Notice that PUT(x,3) and PUT(x,4) are concurrent, i.e., they are not ordered by the → relation.
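If you represent causal metadata as vector clocks (as the Implementation section below recommends), checking whether two operations are ordered or concurrent reduces to a pointwise comparison; a minimal sketch:

def happens_before(vc_a, vc_b):
    """True if vector clock vc_a happens-before vc_b (pointwise <=, with at least one <)."""
    keys = set(vc_a) | set(vc_b)
    no_greater = all(vc_a.get(k, 0) <= vc_b.get(k, 0) for k in keys)
    some_less = any(vc_a.get(k, 0) < vc_b.get(k, 0) for k in keys)
    return no_greater and some_less

def concurrent(vc_a, vc_b):
    """True for clocks like those of PUT(x,3) and PUT(x,4): no order either way."""
    return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)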

We can now define causal consistency for the purposes of this assignment. A key-value store is causally consistent if both of the following conditions are satisfied:

  1. The effect of a write operation by a client on key 𝑘 will always be visible to a successive read operation on 𝑘 by the same client. In other words, if a client issues a write and then later issues a read, it must either see what it wrote, or it must see the effect of another write that happened causally later.

  2. Let 𝑘1 and 𝑘2 be any two keys in the store, and let 𝑣1 and 𝑣2 be versions of 𝑘1 and 𝑘2, respectively, such that 𝑣2 causally depends on 𝑣1. If a client reads 𝑘2 (i.e., GET(𝑘2) returns version 𝑣2), then version 𝑣1 of 𝑘1 must also be visible to that client.

Our definition of causal consistency implies that if, for example, PUT(x,1) → PUT(y,2), all replicas must apply PUT(x,1) before PUT(y,2); otherwise, the value 2 for y might be visible to a client while the value 1 for x is not, which violates condition 2 above. To put it simply, a key-value store is causally consistent if all write (PUT or DELETE) operations take effect on all replicas in an order that respects their causality relationship. However, it would be overkill to try to ensure that all operations take place on all replicas in the same total order.
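Under the vector-clock approach, a replica can respect this ordering by buffering a write replicated from another replica until the write's causal dependencies have taken effect locally, using the standard causal-broadcast delivery condition (a sketch; names are illustrative):

def can_apply(msg_vc, sender, local_vc):
    """A replicated write from `sender` carrying vector clock msg_vc can be
    applied once (1) it is the next write expected from the sender and
    (2) this replica has applied everything else the sender had seen."""
    for replica, count in msg_vc.items():
        if replica == sender:
            if count != local_vc.get(replica, 0) + 1:
                return False
        elif count > local_vc.get(replica, 0):
            return False
    return True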

Implementation

 You will enforce causal consistency in your key-value store by tracking causal dependencies using causal metadata in request and response messages. The approach you take to do this is entirely up to you, as long as you enforce causal consistency. You need to provide a description of your chosen mechanism for tracking causal dependencies in your mechanism-description.txt file. Causal metadata can take various forms; for example, vector clocks are one form of causal metadata (we recommend that you use vector clocks, but we don’t require it, and there are other approaches that would work).

To implement causal consistency, the client needs to participate in the propagation of causal metadata. In particular, when a client issues a write, it needs to indicate which of its prior reads may have caused the write. Therefore, your key-value store should include a causal-metadata field in responses to client requests. The representation of causal metadata doesn’t matter to the client, but clients will be sure to include it in all requests to your key-value store after receiving it in the first response.
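For instance, a PUT handler might accept the causal metadata sent by the client, merge it with the replica's own clock, and return the new metadata in the causal-metadata field (a Flask sketch under the vector-clock assumption; the response messages and variable names are illustrative):

from flask import Flask, request, jsonify
import os

app = Flask(__name__)
SOCKET_ADDRESS = os.environ["SOCKET_ADDRESS"]
store = {}     # in-memory key-value data
local_vc = {}  # this replica's vector clock, indexed by replica socket address

@app.route("/key-value-store/<key>", methods=["PUT"])
def put_key(key):
    body = request.get_json()
    client_vc = body.get("causal-metadata") or {}
    # A complete implementation must first check that client_vc is already
    # satisfied locally (otherwise wait), and must replicate the write to peers.
    local_vc[SOCKET_ADDRESS] = local_vc.get(SOCKET_ADDRESS, 0) + 1
    merged = {r: max(client_vc.get(r, 0), local_vc.get(r, 0))
              for r in set(client_vc) | set(local_vc)}
    created = key not in store
    store[key] = body["value"]
    status = 201 if created else 200
    message = "Added successfully" if created else "Updated successfully"
    return jsonify({"message": message, "causal-metadata": merged}), status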

Let’s walk through the first few steps of the example execution from above. Suppose that M1, M2, M3, M4, and M5 are the versions of causal metadata generated by PUT(x,1), PUT(y,2), PUT(x,3), PUT(x,4), and PUT(z,5), respectively.

First, Client1 sends PUT(x,1) to a replica. The causal metadata for this request is empty because it does not depend on any other PUT operation.
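In terms of the REST API, that first request and response might look like the following (the exact representation of the causal-metadata value is up to your design; the shape shown here is illustrative):

$ curl --request PUT --header "Content-Type: application/json" --write-out "\n%{http_code}\n" --data '{"value": "1", "causal-metadata": ""}' http://<replica-socket-address>/key-value-store/x

{"message":"Added successfully","causal-metadata":<M1>} 201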


