
I have a Telegram userbot that monitors a certain group topic and answers certain messages. I want it to be faster than a human, but surprisingly it isn't. I need to speed up its reaction time — any tips?

from pyrogram import Client, filters
from decouple import config
from pyrogram.types import Message
import re

api_id = config('API_ID')
api_hash = config('API_HASH')
phone = config('PHONE')
login = config('LOGIN')
pattern = r'^Request'

bot = Client(name=login, api_id=api_id, api_hash=api_hash, phone_number=phone)

@bot.on_message(filters.text)
async def command_handler(client: Client, message: Message):
    if re.match(pattern, message.text):
        await client.send_message(
            chat_id=message.chat.id,
            text="reply",
            message_thread_id=message.message_thread_id
        )
        await bot.stop()

bot.run()

My requirements.txt file:

python-decouple==3.8
tgcrypto
kurigram
asked Jul 15 at 7:22
  • You're complaining the response time is "slow", but it's unclear what the magnitude of the delay is. Rather than having a human type in "Request foo", write a short program that types in "Request timestamp [hh:mm:ss.fff]" so we have millisecond resolution. Then instrument the OP code to log a timestamp immediately upon invocation, and also upon starting and completing the .send_message() call. – Commented Jul 15 at 15:33
  • This program essentially does nothing; all the time spent in user code is spent in pyrogram internals. Since that library is built with performance in mind, I suspect you're either expecting some unreasonable response times (you won't achieve tens of milliseconds — the Telegram API is the limiting factor) or running the bot on a slow network connection. Anyway, provide your desired metrics and current metrics, profile first, and exclude the first message after initialization from your profiling data (because Python and pyrogram need some time to import and prepare). – Commented Jul 15 at 15:48
  • @STerliakov how do I measure latency correctly? – Commented Jul 16 at 17:01

1 Answer


pre-compiled regex

Your complaint is that response latency is high. It's unclear how many milliseconds "high" is, and what the target value after refactoring would be.

need to speed up its reaction, any tips?

No, not really. This is simple vanilla code, not doing anything ambitious.

pattern = r'^Request'
...
async def command_handler( ... ):
    if re.match(pattern, message.text):

We're doing slightly more work at request time than absolutely necessary. There's an opportunity to assign pattern = re.compile( ... ), and then test if pattern.match(message.text):. But I expect that's going to be insignificant, not enough to get you to your (unstated) target latency. Pre-compiling tends to be more interesting for a regex that is

  • complex, and
  • within a hot loop.

But that doesn't describe the current situation.

BTW, kudos on ^ anchoring the regex, so even a very long message.text cannot cause the .match() to do a lot of work.
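For concreteness, the pre-compiled variant might look like this sketch; `is_request` is an illustrative helper, not part of the original code:

```python
import re

# Compile once at module load, instead of re-parsing the pattern on every message.
pattern = re.compile(r'^Request')

def is_request(text: str) -> bool:
    """Return True when the message text starts with 'Request'."""
    return pattern.match(text) is not None
```

In the handler, `if re.match(pattern, message.text):` simply becomes `if pattern.match(message.text):`.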

warm the cache

Some statements, like import pandas, take a "long" time to execute the first time out, and thereafter are instantaneous due to a cache hit.

It's conceivable that the first .send_message() call issues a new import or does similar one-time work, and that subsequent message sends happen more quickly. Write an instrumented test to measure the initial and subsequent latencies, so we can prove or disprove this hypothesis. Consider warming the cache by sending an unprompted "I'm alive!" message at startup, to the current channel or perhaps to some unmonitored dev-null channel.
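A minimal timing harness for that test could look like this sketch. `send_stub` is a hypothetical stand-in for the real `client.send_message(...)` call — the timing logic itself doesn't depend on pyrogram:

```python
import asyncio
import time

async def send_stub() -> None:
    # Stand-in for await client.send_message(...); swap in the real call.
    await asyncio.sleep(0)

async def measure(n: int = 3) -> list[float]:
    """Time n consecutive sends; the first includes any one-off setup cost."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        await send_stub()
        latencies.append(time.perf_counter() - start)
    return latencies

latencies = asyncio.run(measure())
```

If the first element of `latencies` is consistently much larger than the rest, the warm-the-cache hypothesis holds, and a startup "I'm alive!" send would hide that one-time cost.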

answered Jul 15 at 15:47
  • Even using just r"Request" will be instant (less than 1 ms) on a Telegram message due to its length limit, so there's no point looking at user code here :) – Commented Jul 15 at 15:49
  • Wow, warming up the cache helped me. The first message in the listener is slower than subsequent ones. – Commented Jul 16 at 16:50
  • I measured latencies by fetching messages from the chat with client.get_chat_history and calculating the difference between my message and the bot's reply. The first message is slower by about 1 second; subsequent messages and replies somehow have equal timestamps. – Commented Jul 16 at 16:56
