After many attempts at finding someone to help me update the crawler, I have finally found someone, and the new working version is here.
Download Windows, Mac or Linux
Remember: you need Java installed for Mac (how to run on Mac guide) & Linux.
What's in the new update?
The last real update was 10 years ago, and since then the web and technology have changed quite a bit. So what has been updated is mostly the core technology behind the crawler to make it work. Some features that I think are pointless now, like Google Drive upload, have been removed. Other features that broke on newer websites have also been removed, but will come back in a future update.
The only new feature is the ability to see recently crawled websites, and that it will now use 1 GB of memory instead of only 250 MB (computers have gotten much more powerful in 10 years).
Want to help?
There will be errors, and there are surely must-have features based on the new SEO issues and rules of the last 10 years. So let me know in the comments of this post.
Also, feel free to post a review or like us on Alternative.to here.
Why did it take so long?
I first discontinued Beam Us Up because I was starting my own company in an unrelated field and didn't have time to maintain and run it (especially since it's free and makes no money). Then, when I wanted to update it, conflicts around the world meant I was no longer able to work with the person I originally made it with.
Although I own a software company, we do not program in Java, which is what the Beam Us Up crawler is made with. I tried several people, but none were suitable until recently, when I found someone who was, and I have also decided to make something more of Beam Us Up. The desktop crawler will remain free, with no paid version.
When is the next update?
Wow, aren't you excited! The next update will be within the next few months. Planned are proper support for sitemaps and robots.txt, a fundamental change in the architecture of the software, and a prettier look. Plus, I am guessing, fixes for bugs found by you and us until then.
Support?
For now, support will be provided in the comments section of the appropriate release.
Ralf
26 . Feb . 2024
Hey William,
has been a while but great news. Thanks a lot for the update!
Will test it as soon as possible for feedback.
All the best for your other endeavors! 🙂
Ralf
G
26 . Feb . 2024
Thanks Ralf, I appreciate it 🙂
Mers
26 . Feb . 2024
It’s been such a long time I can’t recall how this app works 😀 I’ve gone through it again and realized it’s a Screaming Frog killer. Nice job bringing it back to life after such a long time.
G
26 . Feb . 2024
Thanks for saying that, although I am pretty sure Screaming Frog is better 😉 But it isn’t free, soo. Also since then I’ve noticed Screaming Frog and most other tools have taken our idea of having pre-made filters with errors, so that’s nice to see 🙂
Derluck
05 . Mar . 2024
Awesome, have liked on Alternative.to
Rich B
05 . Mar . 2024
Support Request: Thanks for bringing this back! I do think 1 GB of memory is a bit much, however; it pinned my CPU to 99% and froze the application. I think some memory, buffering, or CPU throttling needs to be adjusted. I do have a large website; after scanning 10,000 items, it froze. Windows Server 2016, Intel Xeon 2.4 GHz, 6 GB RAM.
G
07 . Mar . 2024
Hey Rich,
We’ll look into it. I’ve been managing to get to around 80k. Perhaps the easiest solution is to allow users to set how many pages they want crawled.
Thanks.
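(Editor's note: a page-limit option like the one suggested above is usually just a cap on the crawl loop. A minimal sketch in Python, using a toy in-memory link graph in place of real HTTP fetches; the crawler itself is Java, and all names here are illustrative, not BeamUsUp's actual code.)

```python
from collections import deque

def crawl(start, get_links, max_pages=10_000):
    """Breadth-first crawl that stops after visiting max_pages pages."""
    seen = {start}
    queue = deque([start])
    crawled = []
    while queue and len(crawled) < max_pages:
        url = queue.popleft()
        crawled.append(url)
        for link in get_links(url):
            if link not in seen:       # never queue a URL twice
                seen.add(link)
                queue.append(link)
    return crawled

# Toy link graph standing in for real page fetches.
graph = {
    "/": ["/a", "/b"],
    "/a": ["/c"],
    "/b": ["/c", "/d"],
    "/c": [],
    "/d": [],
}
pages = crawl("/", lambda u: graph.get(u, []), max_pages=3)
print(pages)  # stops at the cap: ['/', '/a', '/b']
```

The cap bounds memory roughly linearly in `max_pages`, which is why exposing it as a user setting is a cheap fix for very large sites.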
Waldo
25 . Mar . 2024
Fantastic piece of software! Just 22 MB vs 576 MB for ‘seo spider’!
I ran it today and got the impression that it treats links to the same page as links to different pages if both versions, the relative (../folder/page.html) and the absolute (https://website.com/folder/page.html), are present on the website.
I noticed that because it gave me too many errors on my website. Once I removed the two dots in front of the slash in the relative links (edited ../folder and made it /folder), the errors disappeared.
Thank you!
G
27 . Mar . 2024
Thanks Waldo, that issue has been fixed for the new version which has HUGEEEE updates and will be out next month.
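(Editor's note: the duplicate counting Waldo describes typically happens when a crawler compares raw link strings instead of first resolving each href against the URL of the page it appears on. A hedged Python sketch of the standard fix; the crawler itself is Java, so this is illustrative only.)

```python
from urllib.parse import urljoin

def resolve_link(page_url, href):
    """Resolve a (possibly relative) href against the page it appears on."""
    return urljoin(page_url, href)

page = "https://website.com/folder/other.html"
a = resolve_link(page, "../folder/page.html")                    # relative form
b = resolve_link(page, "https://website.com/folder/page.html")   # absolute form
print(a == b)  # True: both resolve to the same URL
```

Comparing resolved URLs (rather than the literal link text) makes the two forms count as one page, which is what fixed Waldo's inflated error count.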
Martin
17 . Sep . 2024
When I try to scan a new domain by clicking Start, it doesn’t work and the screen stays on Pause – Stop.
What could be the problem?
G
04 . Oct . 2024
Hi Martin, can you make it work by restarting the application? Which websites are they?
Thanks
JohnA
25 . Jan . 2025
I’m a newbie at this whole SEO thing. Very confused with Google’s SEO Analysis tool. Tried BeamUsUp and it made everything a ton easier to figure out. Cleaned up almost all of my warnings.
But, unfortunately, I still have a few hundred warnings “Canonical Same”. I have not been able to figure out what this means or how I should fix it. My website is statically genned and each page content and page slug/url is unique, no duplicates as far as I know. The canonical link is unique across all of the pages as well.
Sorry, could not find any doc or anything on the google SEO side that gave me a hint on what to do next. TIA!
G
25 . Jan . 2025
Hi John,
So “canonical same” means that the page it found sets the canonical (basically the page which should be treated as the correct one) as itself. That is not a bad thing unless it is wrong. So check the links it says are the same, and if those are the pages you think they should be, then it’s correct. The issue would be if they are not the pages you think they should be. “Canonical different” also doesn’t necessarily mean something bad: for example, if your URL is http://www.example.com/somepage?=73733636, that ?=73733636 part could be a certain feature that opened up a part of a page or did something on it. In that case a different canonical would be fine, unless you didn’t want those pages to be linked at all. Essentially, the canonical is just telling the search engine which page is the correct one.
Hope that helps. Feel free to send me the URL and I’ll take a look.
Thanks.
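(Editor's note: to make the explanation above concrete, here is a small hedged Python sketch of how a “canonical same” vs “canonical different” check could work. The names are illustrative only, not BeamUsUp's actual code.)

```python
def classify_canonical(page_url, canonical_href):
    """Compare a page's URL with its declared rel="canonical" target."""
    if canonical_href is None:
        return "missing"
    # Ignore a trailing slash so /about and /about/ compare as the same page.
    if canonical_href.rstrip("/") == page_url.rstrip("/"):
        return "same"        # the page declares itself canonical
    return "different"       # the page points at another URL as canonical

print(classify_canonical("https://example.com/about",
                         "https://example.com/about"))
print(classify_canonical("https://example.com/somepage?=73733636",
                         "https://example.com/somepage"))
```

As the reply above says, neither result is an error by itself; the report is a prompt to verify that the declared canonical really is the page you intended.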
John A
26 . Jan . 2025
Re: few hundred warnings “Canonical Same”
“Which is not a bad thing unless they are wrong.”
I see. So it is reported in Blue because they may be wrong?
> Hope that helps feel free to send me the URL
Yes it did, thanks! URL is ******