I am looking for the best way to bring in data from a website that does not have an API.
I have been able to find URLs that return the data I need in JSON format, and I have them saved in a table in my database.
I am struggling to figure out the best way to parse the data and save it into my database.
There are tons of ways to scrape a website.
More detail about your requirements would help narrow down the possibilities.
Do you need a one-off scrape where you import the data manually?
In that case, there are Chrome extensions (Bardeen is good) that let you visually configure a scraper and export the results as a CSV.
Or do you need a harvester to run scraping jobs automatically?
Scrapingbee and BrowseAI offer remote browsers you can program.
The website I am working with stores a large amount of data in JSON and exposes accessible .json URLs. I have never needed a plugin to scrape this sort of data in the past, but I am not experienced with any of this by any means.
For saving the response, you'll need to connect a resource to write the data to. If you're familiar with SQL and don't already have an API or database in mind for this use case, you might consider using Retool's Database feature.
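For illustration, here's a minimal sketch of that fetch-parse-save loop in Python, assuming a `source_urls` table that already holds your .json URLs and a `scraped_items` table for the results. The table and column names, the SQLite backend, and the shape of the JSON are all hypothetical stand-ins for your actual schema:

```python
import json
import sqlite3

import requests

# Hypothetical schema: adjust table/column names to match your database.
conn = sqlite3.connect("scraper.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS scraped_items (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           source_url TEXT NOT NULL,
           payload TEXT NOT NULL  -- raw JSON, kept so you can re-parse later
       )"""
)

# Read the .json URLs you've already saved in your database
# (assumes a source_urls table with a url column).
urls = [row[0] for row in conn.execute("SELECT url FROM source_urls")]

for url in urls:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()  # fail loudly on HTTP errors
    data = resp.json()       # parse the JSON body

    # If the endpoint returns a list of records, insert one row per record;
    # otherwise store the whole object as a single row.
    records = data if isinstance(data, list) else [data]
    conn.executemany(
        "INSERT INTO scraped_items (source_url, payload) VALUES (?, ?)",
        [(url, json.dumps(record)) for record in records],
    )
    conn.commit()

conn.close()
```

Storing the raw JSON per row keeps the import simple; once you know which fields you actually need, you can promote them to dedicated columns instead of (or alongside) the payload blob.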