I suggest you first consider your coordinate systems. There are two.

**Field Coordinate Axis**

Field boundary corners are in field coordinates (for example): { (-50.0, -35.0, 0), (50.0, -35.0, 0), (50.0, 35.0, 0), (-50.0, 35.0, 0) }, all values in meters.

At any moment in time, the camera on the robot is at (x, y, z) and oriented relative to north by angle theta, measured clockwise when looking down on the field from above. The value of z may be 2.0 (for example).

**Image Coordinate Axis**

The coordinate axes of the camera image are (w, h).
You have frames in time (perhaps every 33 msec), each a grid in the w-h coordinate system of 1080 x 960 pixels (for example), giving an index range of (<0, 1079>, <0, 959>).
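To make the relationship between the two coordinate systems concrete, here is a minimal sketch of projecting a field point into the image, assuming a level pinhole camera; the focal length and frame size below are illustrative assumptions, not values from your setup.

```python
import math

def project_to_image(px, py, pz, cam_x, cam_y, cam_z, theta_deg,
                     f=800.0, cw=1080, ch=960):
    """Project a field point (px, py, pz) onto the image plane of a
    level pinhole camera at (cam_x, cam_y, cam_z), heading theta_deg
    clockwise from north (+y).  f (focal length in pixels) and the
    frame size are illustrative assumptions.  Returns (w, h), or None
    when the point is behind the camera."""
    dx, dy, dz = px - cam_x, py - cam_y, pz - cam_z
    s = math.sin(math.radians(theta_deg))
    c = math.cos(math.radians(theta_deg))
    forward = dx * s + dy * c        # depth along the optical axis
    right   = dx * c - dy * s        # lateral offset in camera frame
    if forward <= 0:
        return None                  # behind the camera
    w = cw / 2 + f * right / forward
    h = ch / 2 - f * dz / forward    # h grows downward in the image
    return (w, h)
```

For instance, with the camera at (0, 0, 2.0) heading north, a boundary point 10 m due north lands on the vertical centerline, below the horizon.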

**Maintaining Orientation of Short Robots (small z)**

You are correct that Harris feature detection may not work, because z (the distance from the surface of the field to the center of the camera lens) may not be large enough for that algorithm unless the robot is near a corner. The rectangle of the field boundary is not at all rectangular in the camera's w-h focal plane. For the same reason, finding lines and then locating their intersections is not the optimal approach either.

Pretend you are the robot. As the robot surveys the field, it can assemble a model of the 360-degree periphery. What it sees is a gradually curving line with four shallow upside-down-V shapes representing the field corners. Unless the robot is almost on top of one of them, the four features that correspond to the corners of the field boundary will only vaguely appear to be corners at all.

**Mathematics of Obtuse Corner Detection**

Two tangent lines stem from each corner. They intersect at a discontinuity of the line's derivative, dh/dw, the slope in the w-h plane of the camera frame. The angle between these two tangent lines will usually be closer to 175 degrees than to 90 degrees, yet the corners are still detectable because the rest of the line has no other such discontinuities of slope. From a Fourier-transform perspective, the 360-degree line is a periodic waveform comprised primarily of the 4th, 12th, 20th, 28th, and 36th harmonics. If you are comfortable with that level of mathematics and you record past frames, you can exploit Fourier series and FFTs for high accuracy in corner detection.
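As a sketch of that Fourier view: for a robot at the center of a square field (half-size 35 m here, purely illustrative), the distance-to-boundary signal over a full 360-degree sweep repeats every 90 degrees, so its spectrum is confined to the 4th harmonic and its multiples, with the 4th dominating. The corners show up as this harmonic structure rather than as sharp 90-degree angles.

```python
import numpy as np

# Distance from a centered robot to the boundary of a square field
# (half-size 35 m, an illustrative assumption) as a function of
# bearing phi, sampled every 0.1 degree over a full sweep.
phi = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
r = 35.0 / np.maximum(np.abs(np.cos(phi)), np.abs(np.sin(phi)))

# Magnitude spectrum of the mean-removed signal; bin k is the
# k-th harmonic of the 360-degree sweep.
spectrum = np.abs(np.fft.rfft(r - r.mean()))
dominant = int(np.argmax(spectrum))   # harmonic with the most energy
```

Off-center, the sweep is no longer exactly periodic every 90 degrees, but the energy still concentrates near these harmonics, which is what makes recorded frames useful for corner detection.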

As you develop your theory and your software, you may find that other aspects of play need to be considered. It may be best to think of those aspects now. Fortunately, if another player or an official blocks a portion of the field's boundary line, that creates a discontinuity in the line itself, but not in the slope of the line in the w-h plane of the camera's image. Your implementation will need to differentiate those two types of breaks, which is hardly an insurmountable problem: a discontinuity in a line and a discontinuity in its derivative are mathematically distinct.
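One way to tell the two kinds of breaks apart can be sketched in plain Python; the slope-jump threshold and the 2-point segments are illustrative choices.

```python
def classify_breaks(h, slope_jump=0.2):
    """Scan boundary heights h[w] sampled at consecutive image columns.
    None marks columns where no boundary pixel was found.  Contiguous
    runs of None are reported as occlusions (a break in the line
    itself); an abrupt change in local slope between neighboring
    segments is reported as a corner candidate (a break in the
    derivative).  slope_jump is an illustrative threshold in pixels
    per column."""
    events = []
    w = 0
    while w < len(h):
        if h[w] is None:                       # gap in the line itself
            start = w
            while w < len(h) and h[w] is None:
                w += 1
            events.append(("occlusion", start, w - 1))
        else:
            w += 1
    # compare slopes of adjacent 2-point segments where data is present
    for w in range(2, len(h) - 1):
        if None in (h[w - 2], h[w - 1], h[w], h[w + 1]):
            continue
        left = h[w - 1] - h[w - 2]
        right = h[w + 1] - h[w]
        if abs(right - left) > slope_jump:
            events.append(("corner", w, w))
    return events
```

An occluding player produces an ("occlusion", start, end) event, while a genuine corner produces "corner" events where the slope jumps with no missing pixels.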

**Redundancy in Feedback Channels**

If the robot can sense its location and orientation in other ways, and so knows x, y, z, and theta above with some degree of reliability, the expected locations of the obtuse angles can be compared with the detected ones to estimate the probability that the robot is properly detecting its orientation.
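A minimal sketch of that comparison, using the example corner coordinates above and bearings measured clockwise from north as in the question; the matching tolerance is an illustrative assumption.

```python
import math

CORNERS = [(-50.0, -35.0), (50.0, -35.0), (50.0, 35.0), (-50.0, 35.0)]

def expected_bearings(x, y, theta_deg, corners=CORNERS):
    """Bearing of each field corner relative to the camera heading,
    in degrees in (-180, 180].  atan2(dx, dy) gives the
    clockwise-from-north bearing used in the question."""
    out = []
    for cx, cy in corners:
        bearing = math.degrees(math.atan2(cx - x, cy - y))
        out.append((bearing - theta_deg + 180.0) % 360.0 - 180.0)
    return out

def agreement(expected, detected, tol_deg=5.0):
    """Fraction of detected corner bearings matching some expected one
    within tol_deg -- a crude confidence that the pose estimate is
    consistent with what the camera actually sees."""
    hits = sum(1 for d in detected
               if any(abs((d - e + 180.0) % 360.0 - 180.0) <= tol_deg
                      for e in expected))
    return hits / len(detected) if detected else 0.0
```

A low agreement score over several frames is a strong hint that the other sensors and the vision pipeline disagree and the pose estimate needs correction.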

**Questions in This Context**

In this context the questions you listed need some reorientation.

**How do I know where the lines intersect?**

The line has two edges that may lie on the same pixel in many cases, so it is not easy to detect in an image containing many other lines. If the line is of a particular color, hue detection can assist in finding it. If the corroborative data analysis above is employed, then misinterpretation of edges can be corrected quickly in real time. Once the line is found, the slope dh/dw at any given point on it can be estimated using linear regression of segments and windowing (looking at short segments one at a time). When an otherwise relatively stable slope quickly shifts 5 or 10 degrees in angle between windows, there is a high probability you have found a distant field corner. A shift of 70 to 80 degrees, combined with a lower h value in the frame, indicates a corner in close proximity.
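The windowed-regression idea can be sketched as follows; the window length and jump threshold are illustrative assumptions.

```python
import numpy as np

def window_angles(w, h, win=15):
    """Fit a line by least squares to each short window of boundary
    pixels (w, h) and return each segment's angle in degrees.  win is
    an illustrative window length in pixels."""
    angles = []
    for i in range(0, len(w) - win + 1, win):
        slope = np.polyfit(w[i:i + win], h[i:i + win], 1)[0]
        angles.append(np.degrees(np.arctan(slope)))
    return angles

def corner_candidates(angles, min_jump=5.0):
    """Indices where the regression angle shifts abruptly between
    consecutive windows: a ~5-10 degree jump suggests a distant
    corner, a ~70-80 degree jump a nearby one."""
    return [i for i in range(1, len(angles))
            if abs(angles[i] - angles[i - 1]) >= min_jump]
```

On a synthetic boundary that is flat for 30 columns and then rises at slope 0.3, the angle shift appears exactly at the window spanning the bend.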

**How do I find the angles of the lines using computer vision?**

Edge detection, systematic elimination of candidate edges that are unlikely to be field boundaries, and then linear regression of the best candidates.

**How do I update this information based on my coordinates?**

Just save them in an appropriate array of (x, y, z, theta) vectors, indexed by frame number. You will probably want to keep track of what you think your robot's x, y, z, and theta values are and constantly test those assumptions against your most recent inputs; otherwise, your robot can become disoriented. The more ways you can detect location and orientation, the higher the reliability of the overall system. If your vision can detect some feature at each goal that will not change during the game, that may help. Ultimately x, y, z, and theta are the parameters of a model, and gradient descent, auto-correlation, and other auto-correction techniques should be applied to keep your robot's orientation model continuously updated.
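As one illustration of such auto-correction, here is a toy gradient-descent step that refines the heading estimate theta against observed corner bearings; the learning rate, iteration count, and corner list are illustrative assumptions.

```python
import math

def refine_theta(theta_deg, observed, corner_xy, pose_xy,
                 lr=0.5, iters=50):
    """Nudge the heading estimate theta so that predicted corner
    bearings best match the observed ones, by gradient descent on
    the squared bearing error.  observed[i] is the measured bearing
    of corner_xy[i] relative to the camera heading, clockwise from
    north as in the question."""
    x, y = pose_xy
    for _ in range(iters):
        grad = 0.0
        for (cx, cy), obs in zip(corner_xy, observed):
            pred = math.degrees(math.atan2(cx - x, cy - y)) - theta_deg
            err = (pred - obs + 180.0) % 360.0 - 180.0
            # d(err)/d(theta) = -1, so d(err^2)/d(theta) = -2 * err
            grad += -2.0 * err
        theta_deg -= lr * grad / len(observed)
    return theta_deg
```

The same pattern extends to x and y once the bearing model includes their partial derivatives; a Kalman-style filter is the usual next step up from this sketch.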

**Recommend Diving Into the Math First**

The 3D trigonometry needed to work all of the above out in detail is initially daunting, but it is not far beyond high school trig if the researcher develops some clear diagrams first and then takes the time to resurrect any rusty mathematics skills or hone some new ones.