
Mechanism for enabling, distributing, and utilizing tools. (janhq blessed and community contributed) #1360

wolfspyre started this conversation in Feature Requests

Given the way Jan / Cortex is laid out, it feels like an intuitive step to have a consistent mechanism that empowers models to utilize tools.

I can see a few ways of doing this (and I'm sure there are other approaches that folks smarter than me have).

Essentially, what I propose is a mechanism to empower models within Cortex to perform actions via tools.

These tools should be downloadable.

These tools should be durably associated with the community members who have released or contributed to them.
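To make the attribution and distribution story concrete, here's a rough sketch of what a tool manifest could look like. Everything here (the interface name, every field) is a made-up illustration, not an existing Jan/Cortex format:

```ts
// Hypothetical tool manifest sketch -- none of these names exist in
// Jan/Cortex today; they only illustrate the idea of downloadable,
// durably attributed tools.
interface ToolManifest {
  name: string;         // unique tool identifier
  version: string;      // semver of this tool release
  authors: string[];    // durable attribution to the contributors
  source: string;       // where the tool artifact is downloaded from
  checksum: string;     // integrity check for the downloaded artifact
  description: string;  // what the tool does, shown to users
}

// Example instance for a community-contributed tool.
const exampleManifest: ToolManifest = {
  name: "web-search",
  version: "0.1.0",
  authors: ["wolfspyre"],
  source: "https://example.com/tools/web-search-0.1.0.tgz",
  checksum: "sha256:<artifact digest>",
  description: "Lets a model issue web search queries.",
};
```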

  • Show community feedback/rating for tools
  • The privilege to provide feedback/ratings for tools should be rescinded from community members who abuse the system, e.g.:
    • To downvote a 'competitor'
    • To shill themselves
    • To disseminate malware
  • When using a tool, emit (and submit?) some metrics around 'how efficiently did I perform this task with this tool?' (time/CPU/memory/steps/tokens/data/etc.)

  • This encourages a focus on performant tools, without abusing users' privacy.
    • The goal is that, in addition to 'ease of adoption', there is a rolling, enduring set of metrics quantifying the effort expended to perform the task with the aid of the tool in question.
    • This can allow better-performing tools to become more popular, encouraging the happy path to also be the faster path.

Obviously, not all metrics are going to be equal. There's a lot of hand-waving here; I'm trying REALLY HARD not to get into specific nuance ;)
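To ground the metrics idea a bit, here's one possible shape for a per-invocation record plus local aggregation. All of these names and fields are assumptions, not an existing Cortex API; the privacy-first angle is that only coarse aggregates would ever (optionally) leave the machine:

```ts
// Hypothetical per-invocation metrics for a tool call; these fields
// are illustrative assumptions, not an existing Cortex API.
interface ToolInvocationMetrics {
  tool: string;             // which tool was invoked
  wallTimeMs: number;       // elapsed time for the call
  cpuTimeMs: number;        // CPU time consumed
  peakMemoryMb: number;     // peak memory during the call
  steps: number;            // intermediate steps taken
  tokens: number;           // tokens spent orchestrating the call
  bytesTransferred: number; // data moved to/from the tool
}

// Aggregate locally; individual invocations (and their payloads)
// never need to be submitted -- only these coarse averages.
function aggregate(runs: ToolInvocationMetrics[]) {
  if (runs.length === 0) return undefined;
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return {
    tool: runs[0].tool,
    samples: runs.length,
    meanWallTimeMs: mean(runs.map((r) => r.wallTimeMs)),
    meanTokens: mean(runs.map((r) => r.tokens)),
  };
}
```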

I can see there being (each tier is sketched in code after this list):

  • Tools "blessed" by Jan/Cortex
    • Ones that users/the community can trust have been peer reviewed and are not malicious
  • Tools "blessed" by another organization.
    • Imagine your bank releasing a tool for your model to use to talk with them
      • Obvs this is a contrived example
  • Tools "blessed" by community members
  • Tools "blessed by the user running cortex
  • Tools 'unblessed', with metrics and checks disabled.
    • These are fine for 'self'/'internal' use, but may not be shared with the community at large, as they provide no introspective mechanisms or ways for the community to assess the accuracy of the toolmaker's claims. Hence, their propagation should be discouraged (but not PREVENTED, as I'm sure there are a few legitimate edge cases that could warrant such things... but that's the exception :) )
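As a quick illustration of how those tiers might be encoded (the names here are purely assumptions, not part of Jan/Cortex):

```ts
// One possible encoding of the trust tiers above; not part of Jan/Cortex.
enum ToolTrustLevel {
  JanBlessed = "jan-blessed",           // peer reviewed by Jan/Cortex
  OrgBlessed = "org-blessed",           // vouched for by another organization
  CommunityBlessed = "community-blessed",
  UserBlessed = "user-blessed",         // trusted only by the local user
  Unblessed = "unblessed",              // metrics and checks disabled
}

// Unblessed tools work locally, but sharing them with the community
// registry would be discouraged since they expose no introspection.
function shareableWithCommunity(level: ToolTrustLevel): boolean {
  return level !== ToolTrustLevel.Unblessed;
}
```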

It seems reasonable to me that combining some set of performance metrics with some adoption/usage/rating metrics could go a long way towards dissuading abuse, whilst allowing a collaboratively inclusive ecosystem with (hopefully) low-ish moderation overhead.
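For instance, a toy scoring function along these lines (weights, ranges, and damping entirely made up) shows how rating and performance data could be blended, with thin data dampened to resist gaming:

```ts
// Toy score blending community rating with aggregated performance.
// The weights, ranges, and damping are illustrative assumptions.
function toolScore(
  avgRating: number,      // community rating, assumed 0..5
  meanWallTimeMs: number, // from locally aggregated metrics
  sampleCount: number,    // how many invocations back the numbers
): number {
  const ratingScore = avgRating / 5;                  // normalize to 0..1
  const speedScore = 1 / (1 + meanWallTimeMs / 1000); // 0..1, faster is better
  const confidence = Math.min(1, sampleCount / 100);  // dampen thin data
  return confidence * (0.6 * ratingScore + 0.4 * speedScore);
}
```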

I feel that by having a consistent way of providing tools to Cortex, alongside a robust suite of privacy-first telemetry reporting mechanisms, Cortex can continue to evolve into a robust wrapper for AI/ML engines as capabilities evolve.
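As a rough sketch of such a consistent surface (pure assumption on my part, not an existing Cortex extension point):

```ts
// Hypothetical plug-in interface a tool would implement; the shape is
// an assumption for illustration only.
interface ToolPlugin {
  manifest: ToolManifest; // the manifest type sketched earlier
  // Invoked by the engine/model with structured arguments; returns a
  // structured result the model can consume.
  invoke(args: Record<string, unknown>): Promise<Record<string, unknown>>;
}
```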

The AI/ML landscape of today is evolving at an incredible pace, so allowing 'engine/model to ...' interactions to be plug-in driven feels like one way to remain fluid in this landscape of shifting sands...

...but maybe I'm missing something?

Thoughts?
