## Point A to B Avoidance


I understand A* and Dijkstra for avoiding obstacles. They require traversable points; obstacles are non-traversable, so the algorithms never bump into them, or, if cost is a factor, the edges through an obstacle carry such a high weight that the algorithms won't take that path. I've been using graphs this way with good results, so A* and Dijkstra are great candidates and I have them working nicely. However, they always require me to have points in place to traverse: I'm the one who puts the points in and creates the relationships between them. It's not AI or any type of learning, just an algorithm traversing points on a map.
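For reference, this is roughly what I mean by "high weight keeps the algorithm off that path". A minimal Dijkstra sketch in Python, with made-up node names and an artificially expensive edge standing in for an obstacle:

```python
# Minimal Dijkstra sketch: the a-c edge has a huge weight (an "obstacle"),
# so the search routes around it via b. Graph and weights are illustrative.
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: [(neighbour, cost), ...]}
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # walk back from goal to start to reconstruct the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = {
    "a": [("b", 1), ("c", 100)],  # a-c is "almost non-traversable"
    "b": [("a", 1), ("c", 1)],
    "c": [("a", 100), ("b", 1)],
}
print(dijkstra(graph, "a", "c"))  # -> ['a', 'b', 'c']
```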

Let's say I have a white image with a green blob in the middle and a point on either side of the blob, and I need to get from a -> b. I don't have points to traverse, I just have this image. Is this a case for machine learning, where you train an agent to get from point a -> b and then apply that learning to more complex maps? If so, what should I be looking at? Or is that the wrong route to take (pardon the pun)? If any of my Google queries contain "ai", all I get back is DeepMind this and DeepMind that, plus a lot of game-developer answers about casting rays out in front and so on, but again that's not AI or learning.

### Edit after answers were posted:

Ok, thanks for the responses. I don't have 50 reputation so I can't comment on your posts. Both answers seem to come back to graphs, as does the paper referenced in the first answer, which is what I'm using just now. So working from images is definitely out; you're right that it's a huge amount of work and would be very complicated. I'll try to explain it in a different way. Take this image of a route created by my graph: the route is fine, it adheres to directional traffic separation schemes, and it's perfectly usable. Zoomed out it looks a bit jagged, but zoomed in the route is fine.

Are you saying there is no way an agent could be trained to navigate around a map, going from a -> b, without the use of graphs? In the image above, the underlying dataset takes into account ocean points (low resolution), areas where there are many islands (high resolution), canals, TSS, port approaches, harbour navigation, etc., so there is a lot of underlying data, and your route is only as good as the data you put in. There are also other concerns, especially in the ocean portions, where you don't want to just connect to your neighbour; you want your route to take longer jumps.

If you had an array of 640 million latitude/longitude points at 3-decimal precision, with high values for land and low values for water, could an agent be trained to go from a -> b by keeping it on water, and if it crashes into land, running the simulation again so it learns from its mistakes?

I know it's a long shot, but I'm just trying to get a handle on what's being done, or whether there is anything out there to look at.

Great responses thanks.


> Let's say I have a white image, a green blob in the middle and points at either side of the blob, I need to get from a -> b. I don't have points to traverse, I just have this image.

If your image is as "simple" as you describe here, with very easily distinguishable colours, the easiest solution would likely be to construct a graph as expected by algorithms such as A* as follows:

• Every pixel becomes a node in the graph, with connections to all adjacent pixels.
• Green pixels (the green blob in the middle, which I assume is your obstacle) are marked as non-traversable, or simply don't get connections to adjacent pixels.
• The pixels belonging to the points on either side of the obstacle become the start and goal nodes.

Then, you have the graph that a traditional pathfinding algorithm like A* would expect, and can simply run the algorithm there.
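As a rough sketch of that idea, here is A* running directly on a tiny hand-made "image" (a 2D array where 1 marks a "green" obstacle pixel and 0 is free), with pixels as implicit nodes and 4-neighbour connectivity. The grid and coordinates are made-up examples:

```python
# A* directly on a pixel grid: every free pixel is a node connected to its
# 4 neighbours; obstacle pixels (value 1) simply have no connections.
# Uses a Manhattan-distance heuristic, which is admissible on a 4-connected grid.
import heapq

def astar_grid(image, start, goal):
    rows, cols = len(image), len(image[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    heap = [(h(start), 0, start)]  # (f = g + h, g, node)
    g = {start: 0}
    prev = {}
    while heap:
        _, cost, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and image[nr][nc] == 0:
                nbr = (nr, nc)
                if cost + 1 < g.get(nbr, float("inf")):
                    g[nbr] = cost + 1
                    prev[nbr] = node
                    heapq.heappush(heap, (cost + 1 + h(nbr), cost + 1, nbr))
    return None  # no path exists

image = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],  # the "blob" in the middle
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
path = astar_grid(image, (2, 0), (2, 4))  # routes around the blob
```

For a real image you would first threshold the pixel colours into this 0/1 array, but the search itself stays the same.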

If you get less "clear" images (e.g. not brightly coloured, easily distinguishable shapes, but more like real-life top-down images of an area), then the above won't really work anymore. In such cases, you'd likely want to look into Reinforcement Learning approaches, where you give an agent a reward when it successfully manages to find a connection between start and goal. In particular, you'd need Deep Reinforcement Learning approaches, because otherwise you won't be able to handle complex images.
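To give a feel for the RL framing (and for the "crash into land, restart, learn from it" idea in your edit), here is a toy tabular Q-learning sketch on a tiny made-up land/water grid. Deep RL replaces the Q-table with a neural network so it can generalise over image inputs; the rewards and hyperparameters below are illustrative guesses, not tuned values:

```python
# Toy tabular Q-learning: the agent gets +10 for reaching the goal, -1 for
# stepping onto land (which ends the episode, i.e. "restart the simulation"),
# and a small per-step penalty to encourage short routes.
import random

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],  # 1 = land, 0 = water
    [0, 0, 0, 0],
]
rows, cols = len(grid), len(grid[0])
start, goal = (0, 0), (2, 3)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = {(r, c): [0.0] * 4 for r in range(rows) for c in range(cols)}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for episode in range(2000):
    state = start
    for _ in range(50):  # cap episode length
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda i: Q[state][i])
        nr, nc = state[0] + actions[a][0], state[1] + actions[a][1]
        if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
            reward, nxt, done = -1.0, state, True   # crashed into land
        elif (nr, nc) == goal:
            reward, nxt, done = 10.0, (nr, nc), True
        else:
            reward, nxt, done = -0.1, (nr, nc), False
        # standard Q-learning update
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt
        if done:
            break

# after training, follow the greedy policy from start
state, path = start, [start]
while state != goal and len(path) < 20:
    a = max(range(4), key=lambda i: Q[state][i])
    nr, nc = state[0] + actions[a][0], state[1] + actions[a][1]
    if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
        break  # policy not converged at this state
    state = (nr, nc)
    path.append(state)
```

The 640-million-point grid from your edit is the same setup at a scale where a table no longer fits, which is exactly where the "deep" part comes in.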

Note that these approaches are not going to be simple to implement, or to fully understand right away, if you're a beginner. There aren't any "beginner-friendly" approaches for the problem you describe; it simply is quite a complex problem, requiring a mix of decision-making and image understanding.