
Hook up partition compaction end to end implementation #6510

Open · wants to merge 6 commits into master
Conversation

@alexqyle (Contributor) commented Jan 15, 2025

What this PR does:

Implements the lifecycle functions related to partitioning compaction so that partitioning compaction works end to end.

  • PartitionCompactionBlockDeletableChecker makes sure no parent blocks are deleted right after each compaction; the cleaner handles parent block cleanup for partitioning compaction.
  • ShardedBlockPopulator uses ShardedPosting to include only the series belonging to a particular partition in the result block (see the sketch below this list).
  • ShardedCompactionLifecycleCallback is used to emit partitioning compaction metrics at the beginning and end of a compaction. It also initializes a ShardedBlockPopulator for each compaction.
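
For illustration, here is a minimal, hypothetical sketch of the partitioning rule that a sharded posting applies when selecting series for a result block: a series is kept when the hash of its label set, modulo the partition count, equals the partition ID. The helper name and the standalone program are illustrative only and are not the PR's code; the real ShardedPosting wraps a Prometheus index.Postings iterator inside the compactor.

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

// belongsToPartition is a hypothetical helper: it reports whether a series
// should be included in the result block of the given partition.
func belongsToPartition(lset labels.Labels, partitionID, partitionCount uint64) bool {
	return lset.Hash()%partitionCount == partitionID
}

func main() {
	series := []labels.Labels{
		labels.FromStrings("__name__", "up", "job", "api"),
		labels.FromStrings("__name__", "up", "job", "db"),
	}
	const partitionCount = 4
	for _, s := range series {
		fmt.Printf("%s -> partition %d of %d\n", s, s.Hash()%partitionCount, partitionCount)
	}
	_ = belongsToPartition(series[0], 0, partitionCount)
}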

Which issue(s) this PR fixes:
Fixes #

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

@danielblando (Contributor) left a comment

LGTM

@yeya24 (Contributor) left a comment

LGTM

@@ -698,15 +754,26 @@ func (c *Compactor) stopping(_ error) error {
}

func (c *Compactor) running(ctx context.Context) error {
// Ensure an initial cleanup occurred as first thing when running compactor.
if err := services.StartAndAwaitRunning(ctx, c.blocksCleaner); err != nil {
Contributor

Is there a specific reason why we have to move cleaning here?

Contributor Author

Because the cleaner cycle might run for a while, depending on how many tenants there are and how big each tenant is. We don't want the compactor to end up in an unhealthy state in the ring because of a long-running cleaner process.
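
To illustrate the ordering described above, here is a hedged sketch using the dskit services package that Cortex builds on; the wiring is simplified and is not the PR's code. The point is that the compactor's own starting phase stays fast so the instance can heartbeat the ring, while the potentially long initial cleanup runs once the compactor is already in its running phase.

package main

import (
	"context"
	"log"
	"time"

	"github.com/grafana/dskit/services"
)

func main() {
	// Stand-in for the blocks cleaner: its starting phase performs the
	// initial cleanup, which can take a long time for many or large tenants.
	cleaner := services.NewBasicService(
		func(ctx context.Context) error {
			log.Println("cleaner: initial cleanup cycle (may be slow)...")
			time.Sleep(2 * time.Second) // simulated long cleanup
			return nil
		},
		func(ctx context.Context) error { <-ctx.Done(); return nil },
		nil,
	)

	// This call sits in the compactor's running(), not starting(): the
	// compactor is already registered and healthy in the ring while the
	// cleanup blocks here.
	ctx := context.Background()
	if err := services.StartAndAwaitRunning(ctx, cleaner); err != nil {
		log.Fatalf("failed to start cleaner: %v", err)
	}
	log.Println("compactor: initial cleanup done, starting compaction loop")
	_ = services.StopAndAwaitTerminated(ctx, cleaner)
}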


func (f *DisabledDeduplicateFilter) DuplicateIDs() []ulid.ULID {
return nil
}
Contributor

Can we make these types private?
Can you add a comment to DisabledDeduplicateFilter? We want to disable the duplicate filter because it makes no sense for the partitioning compactor, since we always have duplicates, right?

Contributor Author

The DefaultDeduplicateFilter from Thanos marks blocks as duplicates if they belong to the same group and have the same source blocks. With the partitioning compactor, partitions covering the same time range always (or eventually) share the same sources; that is the nature of partitioning compaction. We don't want those blocks to be filtered out when grouping for the next compaction level.
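
As a sketch of the comment being asked for (simplified, and not written against Thanos's exact MetadataFilter signature, which the PR implements for the vendored Thanos version), the documented no-op filter could read roughly as follows:

package main

import "github.com/oklog/ulid"

// DisabledDeduplicateFilter intentionally never marks any block as a
// duplicate. Thanos's DefaultDeduplicateFilter treats blocks from the same
// group with identical source blocks as duplicates, but with partitioning
// compaction the partitions covering one time range always (or eventually)
// share the same sources, so source-based deduplication would wrongly drop
// sibling partitions from the next-level grouping.
type DisabledDeduplicateFilter struct{}

// Filter is a no-op; every block passes through untouched. The real method
// matches the Thanos filter interface, which also takes synced-block metrics;
// this simplified signature only illustrates the behaviour.
func (f *DisabledDeduplicateFilter) Filter(metas map[ulid.ULID]struct{}) error {
	return nil
}

// DuplicateIDs always reports no duplicates.
func (f *DisabledDeduplicateFilter) DuplicateIDs() []ulid.ULID {
	return nil
}

func main() {}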


globalMaxt := blocks[0].Meta().MaxTime
g, _ := errgroup.WithContext(ctx)
g.SetLimit(8)
Contributor

Is this a sane default to set in Cortex?

Contributor Author

In my tests, 8 is enough to keep the CPU busy during compaction. I am wondering whether this number is too high for end users; would it just cause CPU usage to stay pegged at 100% for a longer time?
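
For context, here is a small sketch of the pattern under discussion; the configurable concurrency parameter and the function below are hypothetical and not part of the PR. A bounded errgroup caps how many shards are populated in parallel, which is what determines how hard and for how long the CPU is driven.

package main

import (
	"context"
	"fmt"
	"runtime"

	"golang.org/x/sync/errgroup"
)

// populateShards is a hypothetical helper showing a configurable limit
// instead of the hard-coded 8 discussed above.
func populateShards(ctx context.Context, shards, concurrency int) error {
	if concurrency <= 0 {
		concurrency = runtime.GOMAXPROCS(0) // fall back to the available CPUs
	}
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(concurrency) // at most `concurrency` shards are populated at once
	for i := 0; i < shards; i++ {
		i := i
		g.Go(func() error {
			if err := ctx.Err(); err != nil {
				return err
			}
			fmt.Printf("populating shard %d\n", i)
			return nil
		})
	}
	return g.Wait()
}

func main() {
	if err := populateShards(context.Background(), 16, 8); err != nil {
		fmt.Println("error:", err)
	}
}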

}
if b.Meta().MaxTime > globalMaxt {
globalMaxt = b.Meta().MaxTime
}
Contributor

Although it is part of the original tsdb implementation, and we also pass tsdb metrics into the function, I feel it is unnecessary to check for block overlap in the partitioning compactor. In particular, the info log above seems to always fire.

Contributor Author

Makes sense. I will remove that part.

pkg/compactor/sharded_posting.go: outdated review comment (resolved)