Spawning multiple tasks...

ezmoney
Junior Member

Posts: 69

Post by ezmoney on Mar 28, 2013 10:08:08 GMT -5

I was wondering if anybody had worked out the code
to spawn multiple tasks, either through RB or by generating a batch file.

Thus this becomes a parallel task generator.

Maybe there is another way....

Almost everything in RB is serial and waits for the return from
any function.... ::)

It has got me puzzled how to do this... ???

I think a batch file runs independently of RB, so that might
be the way to go... :D

Post your comments and any bright ideas on how this can be done.... 8-)

Also, I wanted to be able to generate cron jobs, and I know nothing
about that.... :P

... Another question: can I create a random access file on the web?

Hope I didn't get too far afield on this... ;)

PS: a quick find from the web...
link.springer.com/content/pdf/10.1007/BF01939357
Abstract
A parallel algorithm for generating all combinations of m out of n items in lexicographic order is presented. The algorithm uses m processors and runs in O(nCm) time. The cost of the algorithm, which is the parallel running time multiplied by the number of processors used, is optimal to within a constant multiplicative factor in view of the Ω(nCm * m) lower bound on the number of operations required to solve this problem using a sequential computer.

That's hot stuff... :o

roguelantern
New Member

Posts: 15

Post by roguelantern on Mar 29, 2013 13:07:13 GMT -5

For parallel tasks, you might be able to run multiple instances of the server, each one on a different port. I have not tested this, but why not? You could then start tasks with links pointing through the different ports to the different Run BASIC servers.
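
Untested sketch of the idea in Python (RB has nothing like this built in; the host, ports and app name are all made up, adjust to your setup):

import threading
import urllib.request

def fire_task(port):
    # hit the task URL on the server instance listening on this port
    url = "http://localhost:%d/seaside/go/runbasicpersonal?app=yourApp" % port
    with urllib.request.urlopen(url) as response:
        print(port, response.status)

# one server per port; the three requests run in parallel threads
threads = [threading.Thread(target=fire_task, args=(p,)) for p in (8008, 8009, 8010)]
for t in threads: t.start()
for t in threads: t.join()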

For cron jobs, you could use a JavaScript timer, I guess. There was an example of this in the Run BASIC wiki (wikispaces).
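
The same re-arming timer idea, sketched in Python for illustration (URL and interval are made up):

import threading
import urllib.request

def run_job():
    # fire the task, then re-arm the timer for an hour from now
    with urllib.request.urlopen("http://localhost:8008/seaside/go/runbasicpersonal?app=yourApp"):
        pass
    threading.Timer(3600, run_job).start()

run_job()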

If I recall correctly, random access files are also described in the Run BASIC wiki.

Sorry for the terse writing, but I'm in a bit of a hurry ;)
meerkat
Senior Member

Posts: 250

Post by meerkat on Mar 29, 2013 16:32:49 GMT -5

Not sure how you want to run stuff, so I'll guess:

There are several ways.
You can put a link on the web page to fire a task:

<html>
<body>
Hello World!
<TABLE BORDER=0 BGCOLOR="steelblue"><TR>
<TD><A HREF="http://www.yourDomain.com:8008/seaside/go/runbasicpersonal?app=yourApp" target="xx">Fire Task</A></TD>
</TR></TABLE>
</body></html>


You could have a project that fires automatically when a web page is displayed. I use this to log web hits showing who is running what.

<html><body>
<iframe
src="http://www.yourDomain.com:8008/seaside/go/runbasicpersonal?app=webhits&YourTrackTag" frameborder=no height=1 width=1>
</iframe>
</body></html>


You could fire it with JavaScript in your code.

html "<script src='src="http://www.yourDomain.com:8008/seaside/go/runbasicpersonal?app=yourApp'></script>


Hope this helps.
ezmoney
Junior Member

Posts: 69

Post by ezmoney on Apr 3, 2013 13:16:33 GMT -5

I want to do a sort... but a unique sort... a quicksort....

en.wikipedia.org/wiki/Quicksort#Simple_version

This quicksort keeps dividing the total array into parts...

Thus each part can become a separate task, and the parts can run in parallel simultaneously.

So a very large array can be sorted very, very fast...

All I have to do is define the sort field and pass that to the sort program.

Assuming the program will then divide each subset of the array into a separate task and
spawn the new task. Thus if I have a gazillion elements in the array, they all get sorted,
and depending on the returns I may be able to start processing the parts that have already
been sorted as the next step.

Now this should take the work up to the core limit, or to GPGPU cores if that can be implemented.

If the programs are independent tasks, then each could use the maximum allowed by the
RB limitations.

Thus only the first task should be the largest core user; all the others will work on subsets of the array being sorted.
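
Something like this sketch is what I have in mind (in Python, since I don't know how to spawn tasks in RB yet): partition once, then each half becomes its own task.

from multiprocessing import Pool
import random

def quicksort(a):
    # ordinary sequential quicksort, run inside each worker
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    left  = [x for x in a if x <  pivot]
    mid   = [x for x in a if x == pivot]
    right = [x for x in a if x >  pivot]
    return quicksort(left) + mid + quicksort(right)

def parallel_quicksort(a):
    # one partitioning pass, then the two halves run as independent tasks
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    left  = [x for x in a if x <  pivot]
    mid   = [x for x in a if x == pivot]
    right = [x for x in a if x >  pivot]
    with Pool(2) as pool:
        sorted_left, sorted_right = pool.map(quicksort, [left, right])
    return sorted_left + mid + sorted_right

if __name__ == "__main__":
    data = [random.randint(0, 9999) for _ in range(100000)]
    assert parallel_quicksort(data) == sorted(data)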

ezmoney
Junior Member

Posts: 69

Post by ezmoney on Apr 4, 2013 7:44:39 GMT -5

My knowledge of MySQL is very poor...

Enlighten me and let me know the limitations....

I would assume that MySQL would have to sort by some algorithm.

But sometimes I want to get a copy of the MySQL data for special statistical work on the data,
so I must download it, or run from RB and get the data from the SQL listing.

Maybe I can generate that in real time too .... hot d**n.. that would be "Fantastic"

Adding, deleting and changing the SQL database entries needs to be done in real time.

Some accounting, and emailing info to customers as their accounts need to be renewed or
updated.

This SQL DB will be dynamic and change as time goes by.

But it must always render the listings for any user.

I am not sure if that is really a good thing, as repeated searches of the SQL are CPU intensive.

I don't think linear searches are practical with the large number of SQL entries in the system.
The system would have to search, and any subsearch, or some way to break up the file or store the index for selections, could be built into another SQL file and updated on any sort, again in real time.

Assuming many people will come (I hope) to my site and look through the data.

Being able to quickly find a listing is important, and so is not wasting the resources time and time again for the 1000's of searches.

Either by creating a subset: if someone searches, that creates a subset of the selection.... as long as the SQL doesn't change, that subsearch becomes available for the next user to find the same thing someone else already searched for.

If you do a web search for a selection on the web, you find a few that are quickly listed, but
you might go deeper, and I think Google adds to this as those searching add more search criteria, probably something statistical based on the most-searched-for websites. Or some
SEO so that a website appears most often, and when you click it, Google makes revenue. These
are called featured websites or ad links...

This further refines the search and avoids going through the entire file.

I hate to waste the server's time continually sorting each time a new element is
added, changed or deleted.

But this has to have a trade-off in time saved versus subfiles generated.

I'm sure Google has all those websites in a database somewhere and sorts them on
some key to quickly return them to a user wanting to find a given search key....

I would expect the database to grow rather large over time, and of course purge
as the selections come and go...

This is getting rather long and involved, so I'll cut it off here and go check out MySQL...

meerkat
Senior Member

Posts: 250

Post by meerkat on Apr 4, 2013 10:22:56 GMT -5


My knowledge of MySQL is very poor...

Since Run BASIC only supports SQLite, I'll stick to that discussion. Granted, MySQL, being server-managed, has its advantages.


Enlighten me and let me know the limitations....

The biggest limitation of SQLite as implemented in RB is that it has a tendency to lock up when you have very complex queries that update; that is, when you have two cursors, one reading and one writing back to the same file. However, most databases simply do adds, changes, deletes, lookups and reports.


I would assume that MySQL would have to sort by some algorithm.

SQLite and almost all other DBs do not sort in the traditional way. As you add records, it inserts the keys, with pointers to the data, into a tree structure that is very efficient. It's very fast.


But sometimes I want to get a copy of the MySQL data for special statistical work on the data,
so I must download it, or run from RB and get the data from the SQL listing.

Not sure what stats you are talking about. But normally you create a query to retrieve the data and do the stats on the fly.
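
For example (the table and the amount column are made up; Python's sqlite3 is used here just to make the sketch self-contained), the stats come straight out of an aggregate query, with no export step:

import sqlite3

db = sqlite3.connect("yourData.db")  # assumes an existing db file
# per-customer counts and averages computed by the engine itself
for row in db.execute("""SELECT customerNum, COUNT(*) AS orders, AVG(amount) AS avgAmount
                         FROM list GROUP BY customerNum"""):
    print(row)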


Maybe I can generate that in real time too .... hot d**n.. that would be "Fantastic"

OOPS... Just answered that above.


Adding, deleting and changing the SQL database entries needs to be done in real time.

That's usually the standard way of maintaining and keeping a DB up to date.


Some accounting, and emailing info to customers as their accounts need to be renewed or
updated.

Done all the time..


This SQL DB will be dynamic and change as time goes by.

That's the primary reason most people use a SQL DB.


But it must always render the listings for any user.

Standard stuff.


I am not sure if that is really a good thing, as repeated searches of the SQL are CPU intensive.

If you are constantly hammering a DB in a particular sequence, then put an index on that data. For example, assume you had a db with the following table called list:
customerNum int(5),
entryDate date,
itemId varchar(10),
notes text
If you constantly need to get a customer's information in date and item sequence, you could place an index on customerNum, entryDate, itemId.
And if you also constantly need item and customer information, you could place an index on itemId, customerNum.
Of course, you would also want tables for customers and items so you can join those tables to the list for customer info and item info.
And joins are simple.
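
In SQLite terms that is just the following (sketched with Python's sqlite3 so it is self-contained; the index names are made up):

import sqlite3

db = sqlite3.connect(":memory:")  # in-memory db just for the demo
db.execute("""CREATE TABLE list (
                customerNum INTEGER,
                entryDate   DATE,
                itemId      VARCHAR(10),
                notes       TEXT)""")
# the two indexes described above
db.execute("CREATE INDEX custDateItem ON list (customerNum, entryDate, itemId)")
db.execute("CREATE INDEX itemCust ON list (itemId, customerNum)")
# SQLite confirms the query is answered from the index, with no sort step
print(db.execute("""EXPLAIN QUERY PLAN
                    SELECT * FROM list
                    WHERE customerNum = 1
                    ORDER BY entryDate, itemId""").fetchall())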


I don't think linear searches are practical with the large number of SQL entries in the system.
The system would have to search, and any subsearch, or some way to break up the file or store the index for selections, could be built into another SQL file and updated on any sort, again in real time.

Well, let's assume you have a fairly small file of only a couple million records. If you knew that items belonged to categories, you could put a category in the item table. Then again, you may want a category table to break that out further. In any event, if you wanted, for example, category = 1 from the list, you could simply do a SQL query with list and item joined as follows:
SELECT * FROM list JOIN item ON item.itemId = list.itemId WHERE item.category = 1 ORDER BY customerNum (or whatever)


Assuming many people will come (I hope) to my site and look through the data.

Being able to quickly find a listing is important, and so is not wasting the resources time and time again for the 1000's of searches.

Either by creating a subset: if someone searches, that creates a subset of the selection.... as long as the SQL doesn't change, that subsearch becomes available for the next user to find the same thing someone else already searched for.


If you do a web search for a selection on the web, you find a few that are quickly listed, but
you might go deeper, and I think Google adds to this as those searching add more search criteria, probably something statistical based on the most-searched-for websites. Or some
SEO so that a website appears most often, and when you click it, Google makes revenue. These
are called featured websites or ad links...


This further refines the search and avoids going through the entire file.

You do not really do searches of large files if you index properly. For example, you should see an instant response finding all items for a certain customer, for certain item categories, with dates that fall on a Friday.
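
For example, a query like this against the indexes above should come back instantly (made-up names again; in SQLite, strftime('%w', ...) gives the day of week, with '5' meaning Friday):

import sqlite3

db = sqlite3.connect("yourData.db")  # assumes an existing db file
rows = db.execute("""SELECT list.*
                     FROM list
                     JOIN item ON item.itemId = list.itemId
                     WHERE list.customerNum = ?
                       AND item.category = ?
                       AND strftime('%w', list.entryDate) = '5'""",
                  (42, 1)).fetchall()
print(len(rows), "matching rows")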


I hate to waste the server's time continually sorting each time a new element is
added, changed or deleted.

Again, almost no time is needed; it only takes a couple of disk seeks (normally already cached anyway) to find the proper place to add the key in the tree.


I would expect the database to grow rather large over time, and of course purge
as the selections come and go...

Normally there is an interval for purging old records from the file.
For example, you could create a query to delete all dates older than some cutoff, run once a day/week/month/year....
DELETE FROM list WHERE entryDate < '2013-03-01'
or something like that.


Actually, there is a program that will generate the code to do adds, changes, deletes, queries and sorts. You simply give it the db path/name and tell it which table to work with (in this case the list table). Then tell it you want to join the customer and item tables. It generates 90% of what you want; you simply tweak the code to show the listings exactly how you want them, and what exactly users can do lookups (drill-downs) and sorts on, etc.
Go to www.kneware.com/rbp/index.html, click on the rbGen program, download it and run it.

Hope this helps.