Scraper API not working?
-
I’m a new user of Link and Grow. I installed it and can run the program without a hitch, aside from very low volumes of leads being picked up.
I’d like to use my own backconnect proxies, but I haven’t seen any directions for this, nor have I had success updating the database to include them. Instead, I chose to use ScraperAPI and found their pricing very fair.
Following the video tutorial, I should be set up: the ScraperAPI option is “on” and the API key is entered. I still have an abysmally low volume of leads, so I checked my usage of the API: 0.
In the logs, unlike in the video tutorial, there’s no indication that it’s using ScraperAPI or building the URLs as it should. Strangely, the ScraperAPI config option defaulted to “on” in my fresh installation, which, if it were truly attempting to use ScraperAPI, would probably have generated a malformed URL or an error. I suppose a simple check on the API key field (key != null) would be enough for a fallback, but it’s clear that it’s not attempting to use the API at all.
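Something like this, hypothetically (all the names here are mine, just to illustrate the fallback I mean, not the actual Link & Grow code):

```python
from urllib.parse import quote_plus

def query_url(query):
    # Plain SERP URL for the query
    return "https://www.google.com/search?q=" + quote_plus(query)

def build_search_url(query, config):
    api_key = config.get("scraperapi_key")
    if config.get("scraperapi") == "on" and api_key:
        # Option is on AND a key is present: route through ScraperAPI
        return ("http://api.scraperapi.com/?api_key=" + api_key
                + "&url=" + quote_plus(query_url(query)))
    # Otherwise fall back to a direct (or proxied) request
    return query_url(query)
```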
Additionally, and probably unrelated: I get persistent “MANUAL VERSION IS OFF; GETTING API KEY FROM SERVER.” and “No Proxy in database.” notifications in the log. These may or may not be accompanied by low result counts, so I suspect they’re irrelevant. The API key bit seems like it might be relevant, but… which API?
Thanks for the help.
-
Hello…
The main idea with setting ScraperAPI on by default was to simplify operation for regular people. That’s why we created the manual mode, which you can activate on the Config tab.

We used to accept a proxy list from users, so it rotated and ran the whole project, but after simplifying we deactivated that part. As you are suggesting, we will reactivate it so you can add your own proxies and run the whole SERP scraping with your own pool.
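Roughly, that part cycled through the user’s proxy list for each request. A minimal sketch of the idea, with illustrative names only, not the real implementation:

```python
import itertools
import requests

# Illustrative sketch only -- not the actual Link & Grow code.
def scrape_with_pool(urls, proxies):
    pool = itertools.cycle(proxies)  # rotate through the user's proxy list
    for url in urls:
        proxy = next(pool)
        try:
            resp = requests.get(url,
                                proxies={"http": proxy, "https": proxy},
                                timeout=10)
            yield url, resp.status_code
        except requests.RequestException as err:
            # A dead proxy only costs this one request; the next URL
            # picks up the next proxy in the cycle
            yield url, err
```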
If you want, we can debug together over a Group Livestream; make sure you join the Facebook Group and the Discord Server so we can talk and go over it.
Discord: https://discord.gg/YsV5Jta6DJ
Facebook: https://www.facebook.com/groups/growthrivenet
I can do a live coding session with you to debug and find the issues together.
-
Thanks for the info; that explains a lot about the proxies. Thank you also for the invite; I’ll probably take you up on that. So far, I’m under the impression that this is an excellent product, and I’m already counting the people I can refer.
That said, why isn’t my ScraperAPI key being used? I got results from the program while the option was “on” without an API key, and it still seems to be running without using the API key. I’d like to use my own key (5,000 free queries a month), though if I’m piggybacking on your account I suppose I have nothing to complain about. I’m not seeing the queries formed in the log (perhaps by design if I’m using your key?), so I don’t know if I have all of the results that I could get with rotating proxies.
That is, is the automatic mode using your backconnect proxies? Since I can’t see whether I get timeouts, errors, captchas, etc., is it possible I’d get longer lists using my own backconnect proxies or ScraperAPI key?
-
I have the same issue. I already put in my ScraperAPI key, but it is not being used. Are there other settings I need to change? I just set “scraperapi_key” to the API key and “scraperapi” to “on”.
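For reference, here is roughly the request I’d expect those two settings to produce, based on ScraperAPI’s documented endpoint (just my guess, not the tool’s actual code):

```python
import requests
from urllib.parse import quote_plus

# My guess at what those settings should produce -- not the tool's code.
settings = {"scraperapi": "on", "scraperapi_key": "YOUR_API_KEY"}

target = "https://www.google.com/search?q=" + quote_plus("test query")
resp = requests.get("http://api.scraperapi.com/",
                    params={"api_key": settings["scraperapi_key"],
                            "url": target},
                    timeout=60)
print(resp.status_code)  # a call like this should appear in the usage dashboard
```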
-
@audiobreather I will check this on the next live coding session.
I just need to send a newsletter with a link so people can tell me what time works for them, so I know availability. Or maybe just improvise.

You can use our own built-in ScraperAPI token, so don’t worry about piggybacking. We are considering absorbing all these expenses so you all have a good and simple product: no proxy scraping or switching, just pressing buttons and finding profitable keywords + locations.
Next week we will start working on the first course for the Cold Emailing method, and we will set up the first campaigns for selling the membership. Wait for it.
-
@betterbe Don’t worry about using our API. The only issue is the thread limit on the plan, which sometimes causes failed requests. The workaround is just retrying, but that is not the point.
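For now, the retry is just something like this (a simplified sketch, not the exact code):

```python
import time
import requests

# Simplified sketch of the retry workaround -- not the exact code.
def fetch_with_retry(url, attempts=3, backoff=2.0):
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=30)
            if resp.status_code == 200:
                return resp
        except requests.RequestException:
            pass  # treat network errors the same as failed requests
        time.sleep(backoff * (attempt + 1))  # back off a little more each time
    return None  # give up after the last attempt
```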
There might be some missing logic in the system; I have now detected 3 bugs from user reports. This is exciting, because people giving it hard use are finding things that I hadn’t fully thought of.
The calls we do for real-time coding will be managed over the Livestream. I already created a Trello board for managing the roadmap: https://trello.com/invite/b/NWk67oQN/70bad0d322b1e9e03a6280ff1f0346aa/link-grow-roadmap
This will give us all insight into what needs to be worked on and what is most important to attack.

I’m starting this journey and am willing to dedicate many hours per week to selling it myself, recording the failures, successes, and methods I discover along the way.
Automatic mode is on by default in Link&Grow, but I need to add more logic around the options and make it more user-friendly. The guy who helps me with PyQt5 is coming back in a couple of weeks, so the GUI will need to wait. In the meantime I’m working on many more things.