I have a Telegram userbot that monitors a certain group topic and answers certain messages. I want it to be faster than a human; surprisingly, right now it is not. I need to speed up its reaction time. Any tips?
from pyrogram import Client, filters
from decouple import config
from pyrogram.types import Message
import re

api_id = config('API_ID')
api_hash = config('API_HASH')
phone = config('PHONE')
login = config('LOGIN')

pattern = r'^Request'

bot = Client(name=login, api_id=api_id, api_hash=api_hash, phone_number=phone)

@bot.on_message(filters.text)
async def command_handler(client: Client, message: Message):
    if re.match(pattern, message.text):
        await client.send_message(
            chat_id=message.chat.id,
            text="reply",
            message_thread_id=message.message_thread_id
        )
        await bot.stop()

bot.run()
My requirements.txt file:
python-decouple==3.8
tgcrypto
kurigram
1 Answer
pre-compiled regex
Your complaint is that response latency is high. It's unclear how many milliseconds "high" is, and what the target value after refactoring would be.
need to speed up its reaction, any tips?
No, not really. This is simple vanilla code, not doing anything ambitious.
pattern = r'^Request'
...
async def command_handler( ... ):
    if re.match(pattern, message.text):
We're doing slightly more work at request time than absolutely necessary.
There's an opportunity to assign pattern = re.compile( ... ) once at module level,
and then test if pattern.match(message.text): in the handler.
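A minimal sketch of that change, reusing the names from the question (handler body elided):

import re

# Compile once at import time; each message then skips re's pattern-cache lookup.
pattern = re.compile(r'^Request')

@bot.on_message(filters.text)
async def command_handler(client: Client, message: Message):
    if pattern.match(message.text):
        ...  # reply as before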
But I expect that's going to be insignificant,
not enough to get you to your (unstated) target latency.
Pre-compiling tends to be more interesting for a regex that is
- complex, and
- within a hot loop.
But that doesn't describe the current situation.
BTW, kudos on ^-anchoring the regex, so even a very long message.text cannot
cause the .match() to do a lot of work.
warm the cache
Some statements, like import pandas, take a "long" time to execute the
first time out, and thereafter are instantaneous due to a cache hit.
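A quick way to see the effect (assuming pandas is installed; any heavy module behaves the same way):

import time

t0 = time.perf_counter()
import pandas              # cold: disk I/O, bytecode, module initialization
t1 = time.perf_counter()
import pandas              # warm: found in sys.modules, essentially free
t2 = time.perf_counter()

print(f"cold import: {t1 - t0:.3f} s")
print(f"warm import: {t2 - t1:.6f} s")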
It's conceivable that the first .send_message() call issues a new import
or does similar work, and that subsequent message sends happen quicker.
Write an instrumented test to measure the initial
and subsequent latencies, so we can prove or disprove
this hypothesis.
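A sketch of such a test; PROBE_CHAT_ID is a hypothetical chat you control, and the sleep keeps us clear of Telegram's rate limits:

import asyncio
import time

PROBE_CHAT_ID = "me"   # hypothetical target; "me" is your own Saved Messages

async def measure_send_latency(client, n=5):
    # Compare the cold first .send_message() against the warm subsequent calls.
    latencies = []
    for i in range(n):
        t0 = time.perf_counter()
        await client.send_message(chat_id=PROBE_CHAT_ID, text=f"probe {i}")
        latencies.append(time.perf_counter() - t0)
        await asyncio.sleep(1)
    print("first:", f"{latencies[0]:.3f} s,",
          "then:", ", ".join(f"{t:.3f} s" for t in latencies[1:]))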
Consider warming the cache by sending an unprompted
"I'm alive!" message at startup,
to the current channel or perhaps to some unmonitored dev-null channel.
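One way to arrange that, using pyrogram's start()/idle() pattern; WARMUP_CHAT_ID is whatever dev-null chat you pick ("me", your own Saved Messages, works as a default):

from pyrogram import idle

WARMUP_CHAT_ID = "me"   # hypothetical: Saved Messages, or an unmonitored channel

async def main():
    await bot.start()
    # Pay the one-time connection and setup costs before real traffic arrives.
    await bot.send_message(chat_id=WARMUP_CHAT_ID, text="I'm alive!")
    await idle()         # keep handling updates until interrupted
    await bot.stop()

bot.run(main())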
Comments:

- Even using just r"Request" will be instant (less than 1 ms) on the Telegram message due to its length limit, so there's no point looking at user code here :) – STerliakov, Jul 15 at 15:49
- Wow, warming up the cache helped me. Every first message in the listener is slower than the subsequent ones. – voipp, Jul 16 at 16:50
- I measured latencies by fetching messages from the chat with client.get_chat_history, so I calculate the diff between my message and the bot's reply. Every first message is slower by 1 second; subsequent messages and replies somehow have equal message times. – voipp, Jul 16 at 16:56
- [...] pyrogram internals. Since that library is built with performance in mind, I suspect you're either expecting some unreasonable response times (you won't achieve tens of milliseconds; the Telegram API is the limiting factor) or running the bot on a slow network connection. Anyway, provide your desired metrics and current metrics, profile first, and exclude the first message after initialization from your profiling data (because Python and pyrogram need some time to import and prepare).