The https://docs.rs/flexpolyline/0.1.0/flexpolyline/ crate runs its bench_encode benchmark in around 19.33 µs on my machine, vs 77 µs for our decode function, so roughly a 4x gap. They seem to use a lot of lookups, which I imagine accounts for the difference, but it would be good to investigate.
I have looked into their code and I assume you're right: the speed-up comes from the explicit conversions via LUTs. Their algorithm also requires them because of its more complex character map, which is not simply monotonically increasing from ASCII 63 as in GPolyline. I will try this approach while fixing #39.
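For reference, a minimal sketch of what a LUT-based decode for GPolyline could look like. The table layout and function names here are illustrative, not taken from either crate:

```rust
// A 256-entry table mapping each input byte to its 6-bit value
// (low 5 bits = payload, bit 0x20 = continuation), or 0xFF for
// bytes outside the valid '?'..='~' range. For GPolyline this is
// just `byte - 63`, so the table mainly buys a branch-free
// validity check; flexpolyline needs a table because its
// character map is not contiguous.
const DECODE_LUT: [u8; 256] = {
    let mut lut = [0xFFu8; 256];
    let mut c = 63usize; // '?'
    while c < 127 {      // '~' + 1
        lut[c] = (c - 63) as u8;
        c += 1;
    }
    lut
};

/// Decode one zigzag-encoded value, advancing `i` past the
/// consumed bytes. Returns `None` on invalid or truncated input.
fn decode_value(bytes: &[u8], i: &mut usize) -> Option<i64> {
    let mut result: u64 = 0;
    let mut shift = 0;
    loop {
        let chunk = DECODE_LUT[*bytes.get(*i)? as usize];
        if chunk == 0xFF || shift >= 64 {
            return None; // invalid character or overlong value
        }
        *i += 1;
        result |= ((chunk & 0x1F) as u64) << shift;
        shift += 5;
        if chunk < 0x20 {
            break; // no continuation bit: value is complete
        }
    }
    // Undo the zigzag sign encoding.
    Some(if result & 1 == 1 {
        !(result >> 1) as i64
    } else {
        (result >> 1) as i64
    })
}

fn main() {
    // "_p~iF" is the first value of the example polyline in
    // Google's documentation and decodes to 38.5 * 1e5.
    let mut i = 0;
    assert_eq!(decode_value(b"_p~iF", &mut i), Some(3_850_000));
}
```

Since GPolyline's alphabet is contiguous, the table mostly replaces a subtraction plus range check with a single indexed load per character; whether that actually beats the arithmetic version here needs benchmarking, which is the point of this issue.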