Hitting problems with the UI

In my game I wanted each ‘screen’ in NetLogo to be a different rendering, hence I stored the room interconnects in an array (as previously posted). I’ve not been able to find a way to make this array work in the same way I would in Python.

Now that I have a better understanding of NetLogo tiles/patches, I could use something like the ‘pacman’ maze, but make the necessary changes to meet my gaming requirements.

As such I started with the level editor to make a basic map to see if my understanding would work.

pacman map editor basic form

I was able to give settings for pxcor/pycor and patch size to get a really nice ‘map’ for the pac-man character to move around in.

adjusting pxcor/pycor and patch size results in a good-sized gaming area

I can use the NetLogo turtle shapes editor to make my own monsters and hero.

NetLogo shapes editor

Therefore, I will create my hero and monsters and then start building upon the logic in the pacman code. I want my character to have attributes and also to have to collect objects, much like the classic ‘Atic Atac’ game on the ZX Spectrum. I’ll also create a ‘special’ monster (a dragon) that will have ‘health’, protecting a special exit key.

Classic Spectrum game ‘Atic Atac’

I’ll use sliders and ticks to implement ‘health’ and ‘time’.
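As a rough sketch of that idea (my own assumption, not existing game code): a slider named `starting-health` could seed a `health` global, with the tick counter doubling as elapsed game time.

```netlogo
globals [ health ]

to setup-player
  set health starting-health  ;; `starting-health` would be an interface slider
  reset-ticks
end

to go
  ;; ... game logic would go here ...
  if health <= 0 [ stop ]     ;; out of health ends the game
  tick                        ;; ticks serve as the game 'time'
end
```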

The biggest coding challenge will be making a ‘weapon’, as pacman doesn’t have a weapon as such, but creating a key and then a procedure for it should be less complex than my original plan of using a multi-room array to present each screen.

Sizing the View

Using the wolf/sheep simulation as a benchmark, I wanted to correct the size of the view to make it clearer and game-friendly. NetLogo has an icon editor, and each icon populates one patch. If the icons look too small, the game will be unplayable.

To understand how to make the view look bigger, Tutorial 1 covers how the world is sized. As with Logo, the centre square is 0,0. We can set up the world to be any height or width, and each location, or patch, also has a defined size, so the apparent size of each square can be made larger or smaller.

NetLogo Tutorial 1 – Controlling the view

I wanted to make the icons of the wolf/sheep simulation bigger, so they look like a ‘game’ size icon on my screen. First I loaded the wolf/sheep simulation with the default values and ran setup.

Wolf/Sheep Model – original settings

Whilst I can see that these are wolves and sheep, the icons still seem a little small for my liking. I can see that the world view is formed of 25 patches in each direction and the patch size is 10. So I can fit a lot of patches into a small space, but not a lot of detail of the icons/turtles themselves.

Settings for world size and patch size

If I reduce the world to 10 patches in each direction and increase the patch size to 44, I effectively shrink the world by more than half while making each patch roughly four times larger.
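The same change can also be made from code rather than the Settings dialogue; a small sketch using NetLogo’s `resize-world` and `set-patch-size` observer primitives (the procedure name is my own):

```netlogo
to make-game-view
  resize-world -10 10 -10 10  ;; 10 patches in each direction from 0,0
  set-patch-size 44           ;; draw each patch 44 pixels square
end
```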

adjustment to the world size and patch size

This results in much clearer detail of the wolf and sheep turtles and each tile, making for a better gaming experience in the NetLogo world that has been created.

The view with updated world height and patch sizes

What is brilliant about this is that, to the developer using the NetLogo UI, there is no change in the logic code that drives the simulation. Of course there have been changes, which can be seen when editing the wolfsheep.nlogo file, but these are created by the settings dialogue.

This is a very effective way of making the world scalable and usable in a well-designed development user interface.

Developing the setup routine

As in Python, using a procedure is a better way to reuse code and keep it ‘DRY’. I moved the array routine into its own procedure, which is called from setup.

..

I saved this and explored the pac-man game, looking at the global variables and how these would map to my own maze dragon game.

globals [
  level         ;; current level
  score         ;; your score
  lives         ;; remaining lives
  extra-lives   ;; total number of extra lives you've won
  scared        ;; time until ghosts aren't scared (0 means not scared)
  level-over?   ;; true when a level is complete
  dead?         ;; true when Pac-Man loses a life
  next-bonus-in ;; time until next bonus is created
  tool which-ghost ;; variables needed to properly load levels 4 and above.
]

So examining the list to start with: I would need something like levels, but I would have rooms instead, and score and lives would be useful. There are no levels as such; once you have completed the adventure, the score and the time it took to complete are what matter. At the moment I’m not doing the AI for the opponents, so ‘scared’ and ‘level-over?’ are also not applicable, as well as the bonus and ghost-specific globals.

My initial list of globals started as

globals [
  lives ;; amount of lives the player has
  health ;; the health of the player
  score ;; players score
  room ;; the room the player is in
]

and I have moved the array creation into a dungeon-map-array procedure:

to setup
 dungeon-map-array
end

to dungeon-map-array
  ;; the array-creation code (shown in full later in this post) lives here
end

I now have the global variables needed to start creating the rendering procedures for my maps.

Dragon Adventure Game – Maze building

For my dragon adventure game, I need to build a ‘maze’. The NetLogo screen itself is an OK size, but if I want playable characters suitable for a video game, I need to use more space.

Firstly, I designed my simple maze with pen and paper.

the initial maze design – a 4×4 grid

As you can see, I started with a 4×4 grid counting from 1, but the NetLogo array extension counts from 0, so I redid the map starting from 0. This becomes significant when building the array.
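Zero-based indexing in the array extension is easy to demonstrate; a tiny sketch of my own:

```netlogo
extensions [array]

to show-zero-indexing
  let a array:from-list [10 20 30]
  print array:item a 0  ;; the first element lives at index 0, so this prints 10
end
```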

array 1, counting from 1 and slightly unordered; array 2, more sequential and counting from 0

So having completed the array for the maze and the interconnecting rooms, I explored how to code this in NetLogo.

Firstly, I found some pseudo-code, this time based on Python, as I have programming experience with it:

;map = {'corridor':['room1','room2'],'room1':['corridor'],'room2':['corridor']} - python method

So the Python defines two rooms and a corridor, along with how they are connected. This is great for a text-based adventure; I want to keep it simpler and not name the rooms, just know from my physical map which rooms are which, so I reduced the names to numbers.

I started by following the array example from the NetLogo dictionary, adjusting for the number of rooms I needed (16).

let a array:from-list n-values 16 [0] ; init array a with 16 values of 0

So my room ‘0’ connects to room ‘1’, and room ‘1’ connects to rooms ‘0, 5, 2’:

  array:set a 0 "1"
  array:set a 1 "0,5,2"

As I am testing, I want to see the value of the array in the console as well:

print a

To make this work in NetLogo I need to create a ‘setup’ button and assign it to a routine, so I create the ‘setup’ button from the ‘Add’ menu and give it the routine ‘setup’.

add the setup button

I then wrap the array creation in the standard setup procedure, not forgetting to add the array extension:

extensions [array]

to setup
  let a array:from-list n-values 16 [0] ; init array a with 16 values of 0
  array:set a 0 "1"
  array:set a 1 "0,5,2"
  print a
end

I can then run the setup procedure, which will output the array:

values in the array represent connections between rooms

So I can see that my array is getting populated; I will be able to use the values as an index to draw the relevant rooms.
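A sketch of reading a room’s connections back out (my own assumption, not code from the post): for other procedures to see the array, `dungeon` would need to be promoted from a local `let` to a global.

```netlogo
extensions [array]
globals [ dungeon ]

to show-connections [room-number]
  ;; array:item returns the stored string, e.g. "0,5,2" for room 1
  print (word "room " room-number " connects to: " array:item dungeon room-number)
end
```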

The completed array

extensions [array]

to setup
  let dungeon array:from-list n-values 16 [0] ; init array dungeon with 16 values of 0
  ; template line to copy and paste:
  ; array:set dungeon n ""
  array:set dungeon 0 "1"
  array:set dungeon 1 "0,5,2"
  array:set dungeon 2 "1,3"
  array:set dungeon 3 "2"
  array:set dungeon 4 "8"
  array:set dungeon 5 "1,6"
  array:set dungeon 6 "5,10,7"
  array:set dungeon 7 "6,11"
  array:set dungeon 8 "4,9"
  array:set dungeon 9 "8"
  array:set dungeon 10 "6"
  array:set dungeon 11 "7,13"
  array:set dungeon 12 "13"
  array:set dungeon 13 "12,14"
  array:set dungeon 14 "13,15"
  array:set dungeon 15 "11,14"
  print dungeon
end

Confirmation output

This was helpful, as I could spot the typo on 11.14.

I spotted I had already made a typo on room 15, so I corrected that from 11.14 to 11,14 and my ‘dungeon’ map array was complete!
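As a design alternative I considered (not what the code above uses): storing each room’s exits as a NetLogo list rather than a comma string avoids having to parse the string later, since lists support direct membership tests.

```netlogo
extensions [array]

to setup-list-style
  let dungeon array:from-list n-values 16 [0]
  array:set dungeon 1 [0 5 2]           ;; exits stored as a list, not "0,5,2"
  show member? 5 array:item dungeon 1   ;; direct membership test; shows true
end
```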

Tutorial #3: Procedures walk-thru

In this post I describe following Tutorial 3 from the NetLogo User Manual.

it all starts here !

Having followed the previous two tutorials, which walk through existing code to gain familiarity with the NetLogo User Interface (UI), this tutorial starts with a completely blank interface. The tutorial takes us through adding various buttons, sliders and graphs with the accompanying code, as well as the turtle behaviour.

start here.. setup

Adding elements to the UI is really simple: the ‘Add’ menu option has various components that can be added to the UI, in this case the ‘setup’ button. The UI helpfully tells us that this button won’t work/do anything by showing it in red; this is due to the absence of code to support the action.

By selecting Code, I can easily assign actions to a button with the ‘to … end’ method used within NetLogo:

to setup
  clear-all
  create-turtles 100 [ setxy random-xcor random-ycor ]
  reset-ticks
end


This pattern repeats for all the controls that are added.

The iterative loop for the sheep program is the ‘go’ button code.

to go
  if ticks >= 500 [ stop ]
  move-turtles
  eat-grass
  reproduce
  check-death
  regrow-grass
  tick
end

In this way our world takes shape, with the sheep expending energy to move and reproduce, and eventually dying when their energy is spent. The sheep interact with the ‘world’ by way of the NetLogo ’tiles’: when they move randomly and encounter a green tile, their energy increases. Grass only regrows after a number of turns, so the balance between sheep, movement, births, deaths and grass growth is exhibited.

to eat-grass
  ask turtles [
    if pcolor = green [
      set pcolor black
      set energy (energy + energy-from-grass)
    ]
    ifelse show-energy?
    [ set label energy ]
    [ set label "" ]
  ]
end

The tutorial takes you from blank canvas to sheep environment quickly.

Conclusion

I found the three tutorials all very well written and easy to follow; I was quickly able to grasp how to add buttons, counters, toggles and sliders, and the association between these and the code.

It is still early in the development practices, but I think a section on comments, and on why ‘globals’ are declared at the start of the program, would have been useful. I think too many developers/programmers overlook the importance of commenting code.

Fractal Frenzy !

In this NetLogo application I explore the computational performance of my own computer and NetLogo, using random-walk theory to render a Mandelbrot set.

The computer in use is a Mac running macOS Big Sur, which is Unix based on the BSD kernel, so optimal for threading and process execution. The CPU is a 3.8GHz 8-core I-8 with 128GB of DDR4 memory. The GPU takes the edge off the macOS windowing environment in which applications such as NetLogo are presented.

Computer specification
Mandelbrot rendered by NetLogo ‘turtlebots’

Initial rendering takes the form of setting the number of turtles which will be used, the ‘throw’, which is the distance the turtles will travel on their path, and the scale factor, which effectively translates to zooming in or out.

The AI being used by the turtles here is the unique way in which a Mandelbrot is generated. As the turtles ‘explore’ their random path, they encounter the spaces around them, whereby if a point is already ‘occupied’ the colour changes. In this way, with many turtles and many random paths based on the Mandelbrot equation, the familiar pattern is rendered.
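A heavily simplified sketch of that idea (my own, not the model’s actual code): wandering turtles ‘colour climb’ patches they have already visited. `throw` is the step-distance slider mentioned above; everything else is an assumption.

```netlogo
to walk-step  ;; turtle procedure
  rt random 360
  fd throw                     ;; step a slider-controlled distance
  ifelse pcolor = black
    [ set pcolor 1 ]           ;; first visit: mark the patch
    [ set pcolor pcolor + 1 ]  ;; revisit: climb one colour step
end
```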

initial Mandelbrot setup screen of 1000 turtles

After approximately one minute, and with turtles shown, the familiar Mandelbrot pattern emerges. The halo effect is largely complete and the turtles are now working on further iterations of ‘colour climbing’ the currently occupied spaces.

The beauty of the Mandelbrot series is that it is endless, and adjusting the scale-factor will render the Mandelbrot zoomed in, even though the turtles are following a random path. Here the speed of ticks is set to fastest, showing the turtles as the render completes.

3 minutes of render time with show turtles

NetLogo was consuming a lot of processing time, but not a lot of the overall resources, as can be seen from this ‘top’ output.

Processes: 683 total, 4 running, 1 stuck, 678 sleeping, 2916 threads 03:13:50
Load Avg: 3.55, 2.78, 2.31 CPU usage: 8.19% user, 8.35% sys, 83.45% idle SharedLibs: 726M resident, 110M data, 158M linkedit.
MemRegions: 235899 total, 17G resident, 440M private, 4369M shared. PhysMem: 54G used (6694M wired), 74G unused.
VM: 5424G vsize, 2318M framework vsize, 0(0) swapins, 0(0) swapouts. Networks: packets: 47177662/46G in, 150461034/114G out.
Disks: 5121776/98G read, 3273115/69G written.

PID COMMAND %CPU TIME #TH #WQ #PORTS MEM PURG CMPR PGRP PPID STATE BOOSTS %CPU_ME %CPU_OTHRS UID FAULTS COW MSGSENT
25364 NetLogo 105.6 08:11.81 45/1 2 378- 894M+ 16K 0B 25364 1 running *0[140] 0.38790 0.40022 501 365252+ 770 2011504+

For the next set of rendering and further zoom, I removed the turtles. In my own previous experiments rendering Mandelbrots in Python 2.x, removing the turtles sped things up considerably.

hide turtles = faster render time

As such, the time to render the fractal decreased, but not massively; a similar amount of resources was used by the system.

Processes: 664 total, 3 running, 661 sleeping, 2862 threads 03:26:12
Load Avg: 3.71, 3.06, 2.71 CPU usage: 8.21% user, 1.24% sys, 90.54% idle SharedLibs: 726M resident, 110M data, 158M linkedit.
MemRegions: 232388 total, 16G resident, 433M private, 4370M shared. PhysMem: 54G used (6360M wired), 74G unused.
VM: 5335G vsize, 2318M framework vsize, 0(0) swapins, 0(0) swapouts. Networks: packets: 47238047/46G in, 150536161/114G out.
Disks: 5123227/98G read, 3288798/70G written.

PID COMMAND %CPU TIME #TH #WQ #PORTS MEM PURG CMPR PGRP PPID STATE BOOSTS %CPU_ME %CPU_OTHRS UID FAULTS COW MSGSENT
25364 NetLogo 108.8 20:33.76 46/1 3 378 778M 16K 0B 25364 1 running *0[298+] 0.49873 0.64401 501 491848+ 770 247550

Conclusion

In this NetLogo simulation I set out to see the graphical capabilities of NetLogo (2D) and my own computer. It can be clearly demonstrated that the computational power and resources of the computer are more than adequate to run complex calculations and rendering, giving the possibility that I could include various levels of complex rendered worlds in my own NetLogo application.

Pacman AI – flee and seek

In this post I explore the NetLogo Pac-Man game, focusing on the interaction between the player (pacman) and the ghosts.

Pacman is a classic 1980s video game which demonstrates attract and flee AI. The objective of the player is to eat dots and progress to the next level, whilst optionally eating ‘power up’ pills to keep the opponent ghosts at bay or eat them for extra points. Random ‘bonus’ items also appear to increase the score. As the game progresses, the layout cycles between 5 maps of increasing complexity, and the speed of the ghosts increases, making navigating and consuming the dots more challenging.

pac-man / ghost encounter without pill = died

As can be seen, when pacman encounters a ghost without a power pill, he is killed. This results in the level restarting, minus any already-consumed pills. The ghosts return to their centre-square home and pacman to his standard ‘spawning’ point on the maze map.

new pacman, ghosts return, dots still in place

The ghosts start off following a random path, but as soon as a line of sight between pacman and a ghost is established on the maze, the ghosts exhibit the AI behaviour of ‘seek’.

ghosts see pacman and are attracted to him giving chase !

to choose-heading ;; Ghosts procedure
  let dirs clear-headings
  let new-dirs remove opposite heading dirs
  let pacman-dir false

  if length dirs = 1
  [ set heading item 0 dirs ]
  if length dirs = 2
  [ ifelse see-pacman item 0 dirs
    [ set pacman-dir item 0 dirs ]
    [ ifelse see-pacman item 1 dirs
      [ set pacman-dir item 1 dirs ]
      [ set heading one-of new-dirs ]
    ]
  ]
  ;; ... (the full model continues with the cases for 3 and 4 open directions)

Ghosts’ routine to seek pacman

This can result in an untimely ending for pacman if he cannot escape around the maze from the pursuing ghosts. A strategy is to allow the ghosts to see pacman while staying in range of a power pill, whereby pacman will then be able to consume the ghosts and score more points. The ghosts then exhibit the AI ‘flee’ behaviour.
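A minimal sketch of ‘flee’ (my own, not the model’s code): turn to face Pac-Man, then reverse heading to run directly away. The `the-pacman` turtle reference is hypothetical.

```netlogo
to flee  ;; ghost procedure; `the-pacman` is a hypothetical turtle reference
  face the-pacman  ;; turn towards Pac-Man...
  rt 180           ;; ...then reverse heading to head directly away
  fd 1
end
```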

power pill consumed ghosts flee !

The interaction between the player and pacman is straightforward enough: press a key and move, with some animation. Edge detection is used to determine the walls between pacman and the maze.
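Wall detection can be sketched by checking the patch ahead before moving (an assumption about the mechanism, not the model’s actual code; blue as the wall colour is invented):

```netlogo
to move-player  ;; sketch of movement gated by a wall check
  let ahead patch-ahead 1
  if ahead != nobody and [pcolor] of ahead != blue
    [ fd 1 ]  ;; only move when the patch ahead exists and is not a wall
end
```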

Conclusion

In this classic 80s game we can observe classic seek and flee behaviours of the AI. The AI also seems to lose interest in chasing pacman if the ghost is outwitted for long enough. Whilst a simple game, there’s no doubt that the hunt and chase of ghosts still makes compelling gameplay, even in today’s era of far more graphically superior games.

Climate Change AI World Creation & Interaction

Climate Change – Initial Environment – no clouds/ no Co2

The Climate Change simulation sets up an AI ‘world’ of atmosphere, earth and space, which is then populated with CO2 particles. Whilst there is much discussion about the ‘ideal world temperature’, Freedman (2018) argues that a warmer world and the rate of change are not good for us as a species; therefore in this simulation I control the rate of change and observe the outcomes of increased CO2 and clouds on the global temperature.

After several minutes, the simulation kept a steady temperature with nominal increases and decreases between 26 and 27.9 Celsius.

no cloud/additional CO2 environment

The User Interface allowed acceleration, making observation of fluctuations in the temperature easier, along with observation of the sun rays (yellow dashes) turning into trapped infrared heat (red dots) with this configuration.

The world, formed of earth (pink), surface (green), atmosphere (blue) and space (black), consists of static, consistent parts of the AI created with the ‘setup-world’ routine, demonstrating NetLogo’s simplicity but effectiveness in creating a world.

to setup-world
  set sky-top max-pycor - 5
  set earth-top 0
  ask patches [ ;; set colors for the different sections of the world
    if pycor > sky-top [ ;; space
      set pcolor scale-color white pycor 22 15
    ]
    if pycor <= sky-top and pycor > earth-top [ ;; sky
      set pcolor scale-color blue pycor -20 20
    ]
    if pycor < earth-top
      [ set pcolor red + 3 ] ;; earth
    if pycor = earth-top ;; earth surface
      [ update-albedo ]
  ]
end

To demonstrate the AI in the simulation, the user takes on the role of a ‘director’, modifying the environment to create changes in the simulation. In this case, more clouds are added and the intensity of sun-brightness is increased.

adding clouds to the environment

It can be observed that the AI simulation is correctly interacting with the clouds, as the sun’s rays bounce off the clouds, thus not being absorbed into the earth and returned into the atmosphere to increase temperature. This creates a temporary drop in temperature, which then starts to increase again.

We can see the AI interacting by identifying that it distinguishes between the interaction of clouds and earth: when the rays hit the clouds, they bounce off right away, whereas if they hit the earth, some are trapped and others reflected.

This is done through the run-sunshine, reflect-rays-from-clouds and encounter-earth routines.

to run-sunshine
  ask rays [
    if not can-move? 0.3 [ die ] ;; kill them off at the edge
    fd 0.3                       ;; otherwise keep moving
  ]
  create-sunshine          ;; start new sun rays from top
  reflect-rays-from-clouds ;; check for reflection off clouds
  encounter-earth          ;; check for reflection off earth and absorption
end

clouds create greater variance in a shorter time

It can be seen that the simulation shows that additional clouds and increased sun-brightness result in a higher frequency of climate change, which would generally be detrimental to us as a species.

Conclusion

In this exploration of NetLogo and the Climate Change simulation, I set out to explore how a world is created and the interactions of components within it, with the AI exhibiting the absorption, reflection and escape of solar rays. The world can be seen to be quite a simple setup, and the interaction between elements equally so, providing a useful demonstration of a global-warming simulation.

Game AI Reviews

In this week’s posting I review two different games with differing AI.

I’m using quite a minimal setup on the PlayStation; the headset and camera were purchased for this exercise.

Equipment Used

PS4 with standard controllers. No modifications, and using a Sony Bravia TV (1080p) for audio and visual. Connection is via HDMI, with no noticeable lag/artifacts.

Purchased for the Alien Isolation game to get a more immersive feel from the auditory response. Also has a microphone.

Allows scanning of the player during gameplay. For the Alien game, it can be used to ‘look around’ things.

Alien Isolation

Alien Isolation play walk-through (first 30 minutes only)

Released in October 2014, this game is showing its age visually. Whilst the in-game videos and stills are very nice, the rendered humans show this game came from 2014, when expectations of visual representations of people/objects weren’t as high as in today’s games, even on the same system.

The reason for choosing this game, however, wasn’t the visuals; it is that the game’s AI received positive reviews, so on that basis alone I wanted to see how good it was.

I am wearing the headset, so there is no in-game audio captured in the video, but it provided an important part of the immersive feel.

Things I am looking for specifically are:

  • Does the game make me feel as if I’m ‘in’ the game; is it immersive?
  • Are interactions with objects realistic?
  • Do other simulated people/creatures exhibit ‘intelligent’ behaviours?

To start with, the music and visuals very much reminded me of the original ‘Alien’ film; with the ‘gritty’ visuals and atmospheric sounds, it was clearly the ‘Alien’ universe I was being brought into. As the game starts there is a lot of ‘fill’ to give the game context, useful once and easily skipped over. Whilst a lot of time has been spent on making the transfer from hard drive to PS4 memory smooth, some of the menus, particularly the ‘save game’, completely returned you to a standard PS4-like shell, which removed me from the gameplay completely.

The headset was a worthy addition to this game; the sound, and the direction the sound came from, really added to the atmosphere. Things ‘clanking’, wires ‘sparking’ and lamps ‘blinking’ gave a spatial representation, adding depth to the game.

Whilst it is easy to criticise 7-year-old graphics, some of the visuals, like diffused light and smoke, were very good, giving a sense of ‘being there’, especially as the player coughed on entering a smoky environment.

  • interactions with objects

Any interaction using the hands was done via the controller, and only when an option appeared to do so. Things I would ordinarily pick up and move out of the way could only be moved by ‘walking’ at them, whereby they would move.

Items which could hurt the player, such as a loose electrical cord, were well represented and presented a challenge in having to ‘dodge’ them.

Other things, such as generator handles, light switches, and picking up found items/constructing them, were all managed by the controller. This felt a little inhibiting and not really natural gameplay.

  • People/Creatures behaviours

Early in the game we interact with several other humans; mostly they are giving monologues as introduction/feedback. In one scene another AI human reports to the bridge of the ship; the path the AI takes is very much predetermined. Even though my player character tried to block/move into the path, the AI took no evasive action. It was more akin to watching a film than playing with a character within it.

When I walked around and did ‘odd’ things like crouching next to another person, they made no comment and did nothing about it, as if I were doing nothing.

In the 30 minutes I played, whilst I could hear the Alien clanking around, I had no interaction with it. Maybe with more time or a harder level, I would have seen more of the Alien.

  • Summary

This is clearly a game created several years ago, even compared to today’s PS4 games. The AI of people is basic and ‘programmed’, with no real sense that my actions are being captured and acted upon.

I did feel some degree of immersion thanks to the visuals and sounds, in particular in the scenes in tunnels/air vents, which is not somewhere I would like to be. I did get a sense of being tense; the sounds/clanking and visuals did make me feel as if I was ‘there’.

I didn’t see the camera being used at all, and I couldn’t really ‘look around things’ as the game had promised. I do think that the microphone on the headset was being used to monitor me, though, as the ‘clanking’ of what I think is the ‘Alien’ in the air ducts increased as I made more noise.

For me the game took too long to unfold. It probably is a good game, but it takes a long time to play to get all the benefits from it; however, for a game with ‘Alien’ in the title and no real sighting of an Alien in 30 minutes of gameplay, that’s a bit disappointing.

F1 2020 Seventy Edition

Stunning visuals and gaming feedback even on a basic setup

Released in July 2020, this is a more recent PS4 game. The PS5 was already released when this game was launched, but the programmers have done everything they can to get the most from the ageing PS4 hardware.

Things I am looking for specifically are:

  • Is it immersive; do I feel like I’m racing a car?
  • Do other cars demonstrate racing behaviours?

To start with, you can see this is a newer PS4 game; the menus are very slick, but seemingly quite complex to navigate around. I’m sure with more time I would find it easier to get from loading the game to racing a car. There are a lot of menus to navigate, and some nice visuals of the circuit which look pretty decent.

  • How good is the immersion ?

Once in the car and racing on the grid, the visuals and sound immediately take over. My television is over 10 years old now, but of decent quality nevertheless, producing 1080p visuals with good speakers within it. The sounds of the car and of passing under bridges were captured very well. Whenever I hit a kerb or gravel the controller rumbled; this was a neat feature and made me feel as if I was driving the car! Obviously the g-forces when braking are something a game cannot reproduce, but I did find myself using a ‘Senna’-style feathering of the brake pedal to good effect (this is like doing ABS without an ABS system, switching between throttle and brake very quickly).

  • Do other cars demonstrate racing behaviours

I started off high up the grid (3rd), and the track (Spa), which has a very tight first corner, gave the PS4 plenty of opportunity to demonstrate its AI capability. Mostly it seemed the cars were more interested in an ‘avoidance’ algorithm rather than ‘holding a line’. Whilst bumping into cars produced the desired effect of damage and cars spinning all over the place in real time, that is more the physics of the world; whilst very well rendered, it doesn’t show the intelligence of the simulated drivers.

It can be seen as I work my way up the grid that the other cars offer little to no resistance. A racing car should make itself ‘wide’ by moving left to right, even by a few inches, to make it difficult to pass, but the AI-driven cars didn’t exhibit this behaviour; it was far too easy to overtake.

Once I was in front, no one really attacked me. I should feel anxious and need to defend my position, but the other cars didn’t take ‘dives’ into corners or try to get into the ‘tow’ of my car to slingshot around.

  • Summary

The racing game was very immersive. If you put a big fan in front of me, put a 10kg weight on my chest and threw a grit/oil combo at me, I could believe I was actually racing a real car. As it is, from the comfort of my living room on a computer screen, this has to be one of the best racing games I’ve played in terms of the actual gameplay.

The cars’ AI at the level I was at seemed very basic; maybe on higher levels the cars take on more ‘personality’, as at the basic level only rudimentary car/racing behaviour was exhibited.

Summary of the games

These two very different games from different times both showed various levels of AI in their gaming engines. I was disappointed that the Alien game didn’t show me an Alien even within 30 minutes, but I did feel immersed in the ‘world’ that was created for me to play in. The AI of characters was poor due to the limited expressions/reactions to my behaviours.

The racing game was very immersive, and visually up to the standards of 2020/21 gameplay, even on older hardware. The vibration of the controller, use of advanced spatial sound and amazingly detailed rendered graphics were excellent. I will play this game some more in the hope that the cars exhibit more ‘racing’ competitive behaviours, but it already demonstrates the levels of interaction/physics of the car/racing world very well.