
Performance optimization when delivering huge data sets #291

Closed
@konsultaner

Description

In my projects I use ArangoDB to compute most of the business logic and to project data sets. At around 100k lines of resulting JSON, I see poor delivery performance, which I guess is mostly caused by the serialization process. That is absolutely fine in itself!

All my servers are optimized to deliver large amounts of data using Netty's zero-copy mechanism. That's why I would like to reduce CPU usage by skipping the serialization step.

Is it somehow possible to get the raw VelocyPack stream? I would like to forward huge results directly to the client and decode them there.

It would be perfect if I could somehow get the stream into a direct buffer, but since the driver does not use Netty, that is probably not possible.
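The pass-through idea above can be sketched with plain `java.nio`, assuming the driver exposed the untouched wire payload as a `byte[]` (it currently does not; `RawPassThrough` and `toDirectBuffer` are hypothetical names, and the payload below is a JSON stand-in for raw VelocyPack):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical pass-through: copy a raw result payload into an off-heap
// (direct) buffer without parsing it, so a transport such as Netty could
// hand it to the client without re-serializing.
public class RawPassThrough {

    // Wrap the untouched wire bytes in a direct buffer, ready for a channel write.
    static ByteBuffer toDirectBuffer(byte[] rawPayload) {
        ByteBuffer buf = ByteBuffer.allocateDirect(rawPayload.length);
        buf.put(rawPayload);
        buf.flip(); // switch from writing into the buffer to reading from it
        return buf;
    }

    public static void main(String[] args) {
        // Stand-in for a raw result payload coming off the driver's connection.
        byte[] payload = "{\"result\":[1,2,3]}".getBytes(StandardCharsets.UTF_8);
        ByteBuffer direct = toDirectBuffer(payload);

        // The bytes reach the far side unchanged; no deserialization happened.
        byte[] roundTrip = new byte[direct.remaining()];
        direct.get(roundTrip);
        System.out.println(new String(roundTrip, StandardCharsets.UTF_8));
    }
}
```

The point of the sketch is only that the CPU cost disappears when nothing between the database socket and the client socket interprets the bytes; today the driver deserializes into Java objects in between.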

If this issue seems too off-topic, please just close it.
