Cache Distribution #40


Closed
wants to merge 7 commits into from

Conversation

siennathesane
Collaborator

Adds an io.Writer interface to flush the cache to an arbitrary destination, either on eviction or manually via a new bigcache.Flush() API.

Tests to match.

Mike Lloyd and others added 6 commits March 15, 2017 15:39
Signed-off-by: Mike Lloyd <[email protected]>
This change would allow users to either flush the cache locally to disk
or to buffer it over a network connection where it can be sent downstream.

Signed-off-by: Mike Lloyd <[email protected]>
Signed-off-by: Mike Lloyd <[email protected]>
@coveralls

coveralls commented Jul 7, 2017

Coverage Status

Coverage decreased (-1.3%) to 97.588% when pulling f07a8a0 on mxplusb:buffer-flush into b6bd76a on allegro:master.

… wrong.

So I started over with a different encoder.

Signed-off-by: Mike Lloyd <[email protected]>
@coveralls

coveralls commented Jul 7, 2017

Coverage Status

Coverage decreased (-1.9%) to 96.957% when pulling 42b1cc1 on mxplusb:buffer-flush into b6bd76a on allegro:master.

@siennathesane siennathesane changed the title Cache flush Cache Distribution Jul 7, 2017
@siennathesane
Collaborator Author

To use gob encoding in Go you need fixed-width data types, so no slices. Given the sharding design, I won't be able to use gob encoding to flush data to a writer without rewriting the entire sharding subsystem. I found this out once I started looking at where coveralls said the test coverage was lower. After following a crazy rabbit hole of good and bad ideas, I decided encoding/gob wasn't the right solution.

Once gob was ruled out (I spent about two hours on it), JSON seemed the next appropriate encoder. To get the proper schema for exporting the cache, I had two options: keep the fields unexported on the primary struct types and implement a custom marshaler that copies the relevant data into a new struct with exported fields, or simply export the necessary fields on the current structs. The latter seemed less invasive and indirect.

Now that this aspect is done, I'm going to work on cache importing via io.Reader. With both of those interfaces in place, you'd then be able to export the cache and/or import a pre-built cache without having to repopulate it.

This is useful for distributed cache designs spanning multiple machines, especially a cache cluster with a master -> slaves relationship for HA/failover.

Collaborator

@janisz janisz left a comment


Thanks for the PR.

I don't understand this PR, and the title is misleading. What feature do you actually want to add? Can you start with documentation? That would let us check that we all understand what it should do.

JSON seemed to be the next appropriate encoder.

JSON is the worst data format you can use here. It's designed for people, not machines.

maxShardSize uint32
Config Config
ShardMask uint64
MaxShardSize uint32
Collaborator


Why did you change the visibility of these fields? It makes the PR unreadable and obscures its main purpose.

Contributor


And it breaks the API.

@janisz janisz requested review from janisz, druminski and adamdubiel July 7, 2017 06:31
@janisz
Collaborator

janisz commented Jul 24, 2017

Closing due to lack of activity. Please reopen if you want to work on reported issues.

@janisz janisz closed this Jul 24, 2017
4 participants