The max count seems to be 100, so we can change it to that. Which makes this much easier, as we would only need to do 15000 / 100 = 150 requests, and only do those requests once. So create a cronjob to do it once every 1 min and you can have it in 150 minutes or so.

On the topic of item_nameid, there is no way around having to go through each item's page. My suggestion would be to create a separate cronjob just for that and have the script do a request every 30 sec, maybe even every 1 min. That would be 60 x 24 = 1440 requests a day, which makes this 15000 / 1440 ≈ 11 days.

When done, you have a base, and all you then really need to do is keep adding new items to the db tbl as cs items get added to the game. You might want to create a detection, or just use the first thing we did and add new items we haven't seen before to the tbl; the second cronjob should then eventually get to them.

I am already using this to get a list of all items within a specific price range. The problem is that it only scrapes information from the market listing pages. You don't get item_nameid through it, nor the buy orders and histogram. The only useful thing here is that you can get the items within a specific range, their minimum price, and the market_hash for each item. Next I use the market hash for each item to make the xhr request for that item's page, which looks something like this, and extract the name id from the response to build the database. Which means 11 days for building the database. But even after that I will have to scrape for days to get the details for the required items. I was hoping to make like 2-3k requests a day for what I am doing, since the histogram can change quite a lot even in a few days.

Originally posted by MalikQayum:
the concept of what i wanted you to do is separate each task.

First you would create an entire db of the items themselves, which would be done by the cronjob that handles that. Then I would have a second cronjob just for adding the item_nameid that it scrapes from each page, putting it in the right column. Now I have the data I want to build my page with: I can fetch the item_nameid from my own tbl and add it to the page where I want the histogram to display the data. Seems pretty straightforward to me; I do not work in Python, but this probably won't even take much time to set up.

Yes, I am doing it almost the same way you mentioned here, and the tasks are also separated. The only difference is that I put the script to sleep if I get a network error or a return status other than 200, and try again after a minute. The script is almost all set up; the only problem is the number of requests, because I want fresh data for 2-3k items at least every day. And without spending on proxies I don't think 2-3k requests are possible.

Originally posted by MalikQayum:
I don't want to assume the worst, but this just points in one direction.

This is the kind of data one would want in order to build a market bot that buys and sells for a small margin of profit. The question is: what are you making, if not that? Because if it is that, then writing about it here is not so wise, as it goes against the SSA/TOS; you are not allowed to automate market transactions. There is a reason why there is no api or way to query multiple items at once: it is clearly a way for Valve to prevent automation. But if this is not the case, then feel free to explain in detail why you need that data, and maybe there might be a solution.

Basically, this is a learning project about web scraping. I do not want to make a bot to automate buying and selling. (Which, in hindsight, now that I think of it, explains why they even have the item_nameid.)
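The listing step discussed in the thread (count=100, so roughly 150 requests for ~15000 items, done once) can be sketched as below. This is a hedged sketch: the `search/render` URL, its `norender=1` JSON mode, and the `appid=730` filter are assumptions about how the Steam Community market search is commonly queried, not details given in the thread.

```python
import json
import math
import time
import urllib.error
import urllib.request

# Assumed endpoint for the market search JSON (norender=1 asks for JSON
# instead of rendered HTML); appid=730 would restrict results to CS items.
SEARCH_URL = ("https://steamcommunity.com/market/search/render/"
              "?query=&appid=730&norender=1&start={start}&count=100")

def pages_needed(total_items, per_page=100):
    # 15000 items at 100 per page -> 150 requests, done only once.
    return math.ceil(total_items / per_page)

def fetch_search_page(start, retry_delay=60):
    # Sleep and retry on a network error or a non-200 status,
    # exactly the behaviour described in the thread.
    while True:
        try:
            with urllib.request.urlopen(SEARCH_URL.format(start=start)) as resp:
                if resp.status == 200:
                    return json.load(resp)
        except urllib.error.URLError:
            pass
        time.sleep(retry_delay)
```

Run `fetch_search_page(0)`, `fetch_search_page(100)`, and so on from a cronjob, one page per run; `pages_needed(15000)` gives the 150 runs needed to cover the catalogue once.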
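Pulling the item_nameid out of each item's listing page can be done with a regex over the page HTML. The `Market_LoadOrderSpread(...)` marker that embeds the id, and the example id below, are assumptions about the page source rather than something stated in the thread:

```python
import re

# Assumption: each listing page embeds the id in a JS call like
# Market_LoadOrderSpread( 12345 ); the exact marker may differ.
NAMEID_RE = re.compile(r"Market_LoadOrderSpread\(\s*(\d+)\s*\)")

def extract_item_nameid(page_html):
    # Return the captured id as a string, or None if the marker is absent.
    match = NAMEID_RE.search(page_html)
    return match.group(1) if match else None

# Illustrative snippet only (hypothetical id):
sample = "<script>Market_LoadOrderSpread( 175882 );</script>"
```

Here `extract_item_nameid(sample)` returns `"175882"`, which the second cronjob would write into the right column of the tbl.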
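The one-request-a-minute budget for the histogram refresh works out exactly as the thread says: 60 x 24 = 1440 requests a day, so ~15000 items take about 11 days for a full pass. A minimal sketch, assuming the usual `itemordershistogram` endpoint (the country/language/currency parameters are my guesses, not from the thread):

```python
import math

# Assumed histogram endpoint; item_nameid comes from the db tbl.
HISTOGRAM_URL = ("https://steamcommunity.com/market/itemordershistogram"
                 "?country=US&language=english&currency=1&item_nameid={nameid}")

def requests_per_day(requests_per_minute=1):
    # One request per minute -> 60 x 24 = 1440 a day.
    return requests_per_minute * 60 * 24

def days_to_refresh(total_items, requests_per_minute=1):
    # 15000 / 1440 ~= 10.4, so about 11 days for a full pass.
    return math.ceil(total_items / requests_per_day(requests_per_minute))
```

At a 30-second interval the budget doubles to 2880 requests a day, which would cover the 2-3k items the OP wants refreshed daily, but a full 15k pass still takes days.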
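The "detection" suggested for newly added items can be as simple as diffing the hash names from the latest search pass against what is already in the tbl; the function below is a minimal sketch of that idea (the item names are illustrative):

```python
def detect_new_items(fetched_hash_names, known_hash_names):
    # Anything seen in the latest search results but not yet in the db tbl;
    # the item_nameid cronjob will eventually pick these up.
    return sorted(set(fetched_hash_names) - set(known_hash_names))
```

For example, `detect_new_items(["AK-47 | Redline", "AWP | Asiimov"], ["AK-47 | Redline"])` returns `["AWP | Asiimov"]`, the one item still missing from the tbl.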