GET https://rct.dev.bbntimes.com/science/the-journey-of-artificial-intelligence-and-machine-learning

ArticleController :: show

Request

GET Parameters

None

POST Parameters

None

Uploaded Files

None

Request Attributes

Key                      Value
_controller              "App\Controller\ArticleController::show"
_firewall_context        "security.firewall.map.context.main"
_links
Symfony\Component\WebLink\GenericLinkProvider {#3588
  -links: [
    3709 => Symfony\Component\WebLink\Link {#3709
      -href: "/build/runtime.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3707 => Symfony\Component\WebLink\Link {#3707
      -href: "/build/644.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3706 => Symfony\Component\WebLink\Link {#3706
      -href: "/build/502.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3705 => Symfony\Component\WebLink\Link {#3705
      -href: "/build/app.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3704 => Symfony\Component\WebLink\Link {#3704
      -href: "/build/view-more.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3703 => Symfony\Component\WebLink\Link {#3703
      -href: "/build/term-condition.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3702 => Symfony\Component\WebLink\Link {#3702
      -href: "/build/contact.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3701 => Symfony\Component\WebLink\Link {#3701
      -href: "/build/scroll-infinite-article.js"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "script"
      ]
    }
    3700 => Symfony\Component\WebLink\Link {#3700
      -href: "/build/app.css"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "style"
      ]
    }
    3699 => Symfony\Component\WebLink\Link {#3699
      -href: "/build/cookie-style.css"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "style"
      ]
    }
    3698 => Symfony\Component\WebLink\Link {#3698
      -href: "/build/term-condition-css.css"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "style"
      ]
    }
    3697 => Symfony\Component\WebLink\Link {#3697
      -href: "/build/contact-css.css"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "style"
      ]
    }
    3696 => Symfony\Component\WebLink\Link {#3696
      -href: "/build/comment-css.css"
      -rel: [
        "preload" => "preload"
      ]
      -attributes: [
        "as" => "style"
      ]
    }
  ]
}
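The `_links` attribute above is populated by Symfony's WebLink component. A minimal Twig sketch of how such preload links are typically registered (asset paths taken from the dump; the template fragment itself is hypothetical, not this application's code):

```twig
{# Hypothetical template fragment: wrapping asset() in preload() adds a
   Symfony\Component\WebLink\Link to the request's GenericLinkProvider,
   producing entries like the ones dumped above. #}
<link rel="stylesheet" href="{{ preload(asset('build/app.css'), { as: 'style' }) }}">
<script src="{{ preload(asset('build/app.js'), { as: 'script' }) }}"></script>
```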
_route                   "article_show"
_route_params
[
  "category" => "science"
  "slug" => "the-journey-of-artificial-intelligence-and-machine-learning"
]
_security_firewall_run   "_security_main"
_stopwatch_token         "9acc36"
category                 "science"
slug                     "the-journey-of-artificial-intelligence-and-machine-learning"

Request Headers

Header            Value
accept            "*/*"
accept-encoding   "gzip, br, zstd, deflate"
connection        "close"
cookie            "PHPSESSID=b35qfquvvo8qq2kjb3ep6e70cu"
host              "rct.dev.bbntimes.com"
user-agent        "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)"
x-php-ob-level    "1"

Request Content

Request content not available (it was retrieved as a resource).

Response

Response Headers

Header          Value
cache-control   "no-cache, private"
content-type    "text/html; charset=UTF-8"
date            "Sat, 22 Feb 2025 13:11:54 GMT"
link            "</build/runtime.js>; rel="preload"; as="script",</build/644.js>; rel="preload"; as="script",</build/502.js>; rel="preload"; as="script",</build/app.js>; rel="preload"; as="script",</build/view-more.js>; rel="preload"; as="script",</build/term-condition.js>; rel="preload"; as="script",</build/contact.js>; rel="preload"; as="script",</build/scroll-infinite-article.js>; rel="preload"; as="script",</build/app.css>; rel="preload"; as="style",</build/cookie-style.css>; rel="preload"; as="style",</build/term-condition-css.css>; rel="preload"; as="style",</build/contact-css.css>; rel="preload"; as="style",</build/comment-css.css>; rel="preload"; as="style""
x-debug-token   "90db45"

Cookies

Request Cookies

Key         Value
PHPSESSID   "b35qfquvvo8qq2kjb3ep6e70cu"

Response Cookies

No response cookies

Session

Session Metadata

Key         Value
Created     "Sat, 22 Feb 25 13:11:49 +0000"
Last used   "Sat, 22 Feb 25 13:11:52 +0000"
Lifetime    0

Session Attributes

Attribute                   Value
_csrf/https-comment         "Dou7Kbh3KMNjW_hQt_0DAEC-n0WXltbm4Qc_3WZ1CLI"
_csrf/https-cookie_accept   "Klq32aw0c7UYb20Uv4Tx4cx_c0GG-svZNaoiS3Kl2sg"

Session Usage

6 usages (stateless check enabled)
Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage:76
[
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/TokenStorage/SessionTokenStorage.php"
    "line" => 76
    "function" => "start"
    "class" => "Symfony\Component\HttpFoundation\Session\Session"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/CsrfTokenManager.php"
    "line" => 69
    "function" => "hasToken"
    "class" => "Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/Csrf/Type/FormTypeCsrfExtension.php"
    "line" => 82
    "function" => "getToken"
    "class" => "Symfony\Component\Security\Csrf\CsrfTokenManager"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 134
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\Csrf\Type\FormTypeCsrfExtension"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 128
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Form.php"
    "line" => 908
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/src/Controller/ArticleController.php"
    "line" => 220
    "function" => "createView"
    "class" => "Symfony\Component\Form\Form"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 181
    "function" => "show"
    "class" => "App\Controller\ArticleController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 76
    "function" => "handleRaw"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Kernel.php"
    "line" => 197
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/runtime/Runner/Symfony/HttpKernelRunner.php"
    "line" => 35
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\Kernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    "line" => 29
    "function" => "run"
    "class" => "Symfony\Component\Runtime\Runner\Symfony\HttpKernelRunner"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/public/index.php"
    "line" => 5
    "args" => [
      "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    ]
    "function" => "require_once"
  ]
]
Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage:79
[
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/TokenStorage/SessionTokenStorage.php"
    "line" => 79
    "function" => "has"
    "class" => "Symfony\Component\HttpFoundation\Session\Session"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/CsrfTokenManager.php"
    "line" => 69
    "function" => "hasToken"
    "class" => "Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/Csrf/Type/FormTypeCsrfExtension.php"
    "line" => 82
    "function" => "getToken"
    "class" => "Symfony\Component\Security\Csrf\CsrfTokenManager"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 134
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\Csrf\Type\FormTypeCsrfExtension"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 128
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Form.php"
    "line" => 908
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/src/Controller/ArticleController.php"
    "line" => 220
    "function" => "createView"
    "class" => "Symfony\Component\Form\Form"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 181
    "function" => "show"
    "class" => "App\Controller\ArticleController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 76
    "function" => "handleRaw"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Kernel.php"
    "line" => 197
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/runtime/Runner/Symfony/HttpKernelRunner.php"
    "line" => 35
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\Kernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    "line" => 29
    "function" => "run"
    "class" => "Symfony\Component\Runtime\Runner\Symfony\HttpKernelRunner"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/public/index.php"
    "line" => 5
    "args" => [
      "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    ]
    "function" => "require_once"
  ]
]
Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage:52
[
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/TokenStorage/SessionTokenStorage.php"
    "line" => 52
    "function" => "has"
    "class" => "Symfony\Component\HttpFoundation\Session\Session"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/CsrfTokenManager.php"
    "line" => 70
    "function" => "getToken"
    "class" => "Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/Csrf/Type/FormTypeCsrfExtension.php"
    "line" => 82
    "function" => "getToken"
    "class" => "Symfony\Component\Security\Csrf\CsrfTokenManager"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 134
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\Csrf\Type\FormTypeCsrfExtension"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 128
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Form.php"
    "line" => 908
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/src/Controller/ArticleController.php"
    "line" => 220
    "function" => "createView"
    "class" => "Symfony\Component\Form\Form"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 181
    "function" => "show"
    "class" => "App\Controller\ArticleController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 76
    "function" => "handleRaw"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Kernel.php"
    "line" => 197
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/runtime/Runner/Symfony/HttpKernelRunner.php"
    "line" => 35
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\Kernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    "line" => 29
    "function" => "run"
    "class" => "Symfony\Component\Runtime\Runner\Symfony\HttpKernelRunner"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/public/index.php"
    "line" => 5
    "args" => [
      "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    ]
    "function" => "require_once"
  ]
]
Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage:56
[
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/TokenStorage/SessionTokenStorage.php"
    "line" => 56
    "function" => "get"
    "class" => "Symfony\Component\HttpFoundation\Session\Session"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-csrf/CsrfTokenManager.php"
    "line" => 70
    "function" => "getToken"
    "class" => "Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/Csrf/Type/FormTypeCsrfExtension.php"
    "line" => 82
    "function" => "getToken"
    "class" => "Symfony\Component\Security\Csrf\CsrfTokenManager"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 134
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\Csrf\Type\FormTypeCsrfExtension"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/ResolvedFormType.php"
    "line" => 128
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Extension/DataCollector/Proxy/ResolvedTypeDataCollectorProxy.php"
    "line" => 95
    "function" => "finishView"
    "class" => "Symfony\Component\Form\ResolvedFormType"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/form/Form.php"
    "line" => 908
    "function" => "finishView"
    "class" => "Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/src/Controller/ArticleController.php"
    "line" => 220
    "function" => "createView"
    "class" => "Symfony\Component\Form\Form"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 181
    "function" => "show"
    "class" => "App\Controller\ArticleController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 76
    "function" => "handleRaw"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Kernel.php"
    "line" => 197
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/runtime/Runner/Symfony/HttpKernelRunner.php"
    "line" => 35
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\Kernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    "line" => 29
    "function" => "run"
    "class" => "Symfony\Component\Runtime\Runner\Symfony\HttpKernelRunner"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/public/index.php"
    "line" => 5
    "args" => [
      "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    ]
    "function" => "require_once"
  ]
]
Symfony\Component\Security\Core\Authentication\Token\Storage\UsageTrackingTokenStorage:41
[
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-core/Authentication/Token/Storage/UsageTrackingTokenStorage.php"
    "line" => 41
    "function" => "getMetadataBag"
    "class" => "Symfony\Component\HttpFoundation\Session\Session"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/twig-bridge/AppVariable.php"
    "line" => 103
    "function" => "getToken"
    "class" => "Symfony\Component\Security\Core\Authentication\Token\Storage\UsageTrackingTokenStorage"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Extension/CoreExtension.php"
    "line" => 1635
    "function" => "getUser"
    "class" => "Symfony\Bridge\Twig\AppVariable"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/var/cache/dev/twig/a4/a456f2f504a18cd81037fa69e543310c.php"
    "line" => 200
    "function" => "twig_get_attribute"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 394
    "function" => "doDisplay"
    "class" => "__TwigTemplate_824a013e4f3f4a68e03a2d244e765025"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 367
    "function" => "displayWithErrorHandling"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 379
    "function" => "display"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/TemplateWrapper.php"
    "line" => 38
    "function" => "render"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Environment.php"
    "line" => 280
    "function" => "render"
    "class" => "Twig\TemplateWrapper"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/framework-bundle/Controller/AbstractController.php"
    "line" => 448
    "function" => "render"
    "class" => "Twig\Environment"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/framework-bundle/Controller/AbstractController.php"
    "line" => 453
    "function" => "doRenderView"
    "class" => "Symfony\Bundle\FrameworkBundle\Controller\AbstractController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/framework-bundle/Controller/AbstractController.php"
    "line" => 253
    "function" => "doRender"
    "class" => "Symfony\Bundle\FrameworkBundle\Controller\AbstractController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/src/Controller/AppController.php"
    "line" => 126
    "function" => "render"
    "class" => "Symfony\Bundle\FrameworkBundle\Controller\AbstractController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 181
    "function" => "renderHeader"
    "class" => "App\Controller\AppController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 76
    "function" => "handleRaw"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpCache/SubRequestHandler.php"
    "line" => 86
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Fragment/InlineFragmentRenderer.php"
    "line" => 78
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpCache\SubRequestHandler"
    "type" => "::"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Fragment/FragmentHandler.php"
    "line" => 83
    "function" => "render"
    "class" => "Symfony\Component\HttpKernel\Fragment\InlineFragmentRenderer"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/DependencyInjection/LazyLoadingFragmentHandler.php"
    "line" => 47
    "function" => "render"
    "class" => "Symfony\Component\HttpKernel\Fragment\FragmentHandler"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/twig-bridge/Extension/HttpKernelRuntime.php"
    "line" => 44
    "function" => "render"
    "class" => "Symfony\Component\HttpKernel\DependencyInjection\LazyLoadingFragmentHandler"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/var/cache/dev/twig/b9/b9cb140abab7ef4ef8cb398831c75ac0.php"
    "line" => 207
    "function" => "renderFragment"
    "class" => "Symfony\Bridge\Twig\Extension\HttpKernelRuntime"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 171
    "function" => "block_header"
    "class" => "__TwigTemplate_dc67cdc305f050f0a27ba7ef152f05af"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/var/cache/dev/twig/b9/b9cb140abab7ef4ef8cb398831c75ac0.php"
    "line" => 91
    "function" => "displayBlock"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 394
    "function" => "doDisplay"
    "class" => "__TwigTemplate_dc67cdc305f050f0a27ba7ef152f05af"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 367
    "function" => "displayWithErrorHandling"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/var/cache/dev/twig/c3/c336f4e76fc20e4db84e3be131276b68.php"
    "line" => 52
    "function" => "display"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 394
    "function" => "doDisplay"
    "class" => "__TwigTemplate_3ce0324a396de697d1fad9fabd68df72"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 367
    "function" => "displayWithErrorHandling"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Template.php"
    "line" => 379
    "function" => "display"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/TemplateWrapper.php"
    "line" => 38
    "function" => "render"
    "class" => "Twig\Template"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/twig/twig/src/Environment.php"
    "line" => 280
    "function" => "render"
    "class" => "Twig\TemplateWrapper"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/framework-bundle/Controller/AbstractController.php"
    "line" => 448
    "function" => "render"
    "class" => "Twig\Environment"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/framework-bundle/Controller/AbstractController.php"
    "line" => 453
    "function" => "doRenderView"
    "class" => "Symfony\Bundle\FrameworkBundle\Controller\AbstractController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/framework-bundle/Controller/AbstractController.php"
    "line" => 253
    "function" => "doRender"
    "class" => "Symfony\Bundle\FrameworkBundle\Controller\AbstractController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/src/Controller/ArticleController.php"
    "line" => 277
    "function" => "render"
    "class" => "Symfony\Bundle\FrameworkBundle\Controller\AbstractController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 181
    "function" => "show"
    "class" => "App\Controller\ArticleController"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 76
    "function" => "handleRaw"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Kernel.php"
    "line" => 197
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/runtime/Runner/Symfony/HttpKernelRunner.php"
    "line" => 35
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\Kernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    "line" => 29
    "function" => "run"
    "class" => "Symfony\Component\Runtime\Runner\Symfony\HttpKernelRunner"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/public/index.php"
    "line" => 5
    "args" => [
      "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    ]
    "function" => "require_once"
  ]
]
Symfony\Component\Security\Http\Firewall\ContextListener:171
[
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/security-http/Firewall/ContextListener.php"
    "line" => 171
    "function" => "remove"
    "class" => "Symfony\Component\HttpFoundation\Session\Session"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/event-dispatcher/Debug/WrappedListener.php"
    "line" => 116
    "function" => "onKernelResponse"
    "class" => "Symfony\Component\Security\Http\Firewall\ContextListener"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/event-dispatcher/EventDispatcher.php"
    "line" => 220
    "function" => "__invoke"
    "class" => "Symfony\Component\EventDispatcher\Debug\WrappedListener"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/event-dispatcher/EventDispatcher.php"
    "line" => 56
    "function" => "callListeners"
    "class" => "Symfony\Component\EventDispatcher\EventDispatcher"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/event-dispatcher/Debug/TraceableEventDispatcher.php"
    "line" => 139
    "function" => "dispatch"
    "class" => "Symfony\Component\EventDispatcher\EventDispatcher"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 214
    "function" => "dispatch"
    "class" => "Symfony\Component\EventDispatcher\Debug\TraceableEventDispatcher"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 202
    "function" => "filterResponse"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/HttpKernel.php"
    "line" => 76
    "function" => "handleRaw"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/http-kernel/Kernel.php"
    "line" => 197
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\HttpKernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/symfony/runtime/Runner/Symfony/HttpKernelRunner.php"
    "line" => 35
    "function" => "handle"
    "class" => "Symfony\Component\HttpKernel\Kernel"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    "line" => 29
    "function" => "run"
    "class" => "Symfony\Component\Runtime\Runner\Symfony\HttpKernelRunner"
    "type" => "->"
  ]
  [
    "file" => "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/public/index.php"
    "line" => 5
    "args" => [
      "/var/www/vhosts/dev.bbntimes.com/rct.dev.bbntimes.com/vendor/autoload_runtime.php"
    ]
    "function" => "require_once"
  ]
]



<h1>The Journey of Artificial Intelligence and Machine Learning</h1>
<p><a href="technology/the-latest-trends-in-artificial-intelligence-ai-and-machine-learning-ml" target="_blank" rel="noopener">Artificial intelligence (AI)</a> is increasingly affecting the world around us, making an impact in retail, financial services and other sectors of the <a href="technology/will-artificial-superintelligence-asi-create-infinite-economic-growth-or-more-inequality" target="_blank" rel="noopener">economy</a>.</p>
    <p><a href="companies/5-real-time-applications-of-machine-learning" target="_blank" rel="noopener">Applications of machine learning </a>allow for mass personalisation at scale in marketing across different sectors of the economy and improved outcomes for health care by detecting cancer at an earlier stage with medical imaging.</p>\r\n
    <p>AI has undergone a transformation in the past decade, from being a field in research that at some point had previously stagnated to one where it is expected to become the dominant technology of the next decade (and thereafter too). The journey of how AI arrived at its current state has been both a fascinating and at times a difficult journey. We'll start with a refresher of the definitions of AI.</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width" data-image-href="http://developer.nvidia.com/deep-learning"><img src="/images/AI_ML_DL_Graph.png" alt="AI_ML_DL_Graph.png" width="901" height="573" /></div>\r\n
    <p><em>Source for image above&nbsp;<a href="https://developer.nvidia.com/deep-learning" target="_blank" rel="nofollow noopener">NVIDIA</a>&nbsp;</em></p>\r\n
<h2><strong>Definition of Artificial Intelligence (AI)</strong></h2>
<p>AI deals with developing computing systems capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment. Classical AI algorithms and approaches included rules-based systems, uninformed search (breadth-first, depth-first, uniform cost search), and informed search such as the A and A* algorithms, which use a heuristic. A heuristic is used to rank alternative options based upon the information that is available. <a href="https://www.techopedia.com/definition/5436/heuristic" target="_blank" rel="nofollow noopener">Techopedia</a> explains heuristics as methods that "use available data, rather than predefined solutions, to solve machine and human problems. Heuristical solutions are not necessarily provable or accurate but are usually good enough to solve small-scale issues that are part of a larger problem."</p>
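The idea of ranking alternatives with a heuristic can be shown in a few lines of Python. This is a toy sketch, not from the article: the frontier nodes, their coordinates and the goal position below are all invented for illustration.

```python
import math

# Toy frontier: candidate nodes with (x, y) coordinates; the goal sits at (0, 0).
frontier = {"a": (3, 4), "b": (1, 1), "c": (6, 8)}
goal = (0, 0)

def heuristic(pos):
    # Straight-line (Euclidean) distance to the goal: not exact, but
    # "good enough" to rank alternatives, which is all a heuristic promises.
    return math.dist(pos, goal)

# Greedy best-first choice: expand the node the heuristic ranks cheapest.
best = min(frontier, key=lambda n: heuristic(frontier[n]))
```

Here the heuristic never proves that `best` lies on an optimal path; it merely orders the options using the data at hand, exactly in the spirit of the Techopedia definition above.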
<p>These early Classical AI approaches laid a strong foundation for today's more advanced approaches, which are better suited to large search spaces and big data sets. Classical AI also drew on logic, involving propositional and predicate calculus. Whilst such approaches suit deterministic scenarios, the problems encountered in the real world are often better suited to probabilistic approaches.</p>
<p>There are three types of AI:</p>
<p><strong>Narrow AI:</strong> the field of AI where the machine is designed to perform a single task and gets very good at performing that particular task. However, once trained, the machine does not generalise to unseen domains. This is the form of AI that we have today, for example Google Translate.</p>
<p><strong>Artificial General Intelligence (AGI):</strong> a form of AI that can accomplish any intellectual task that a human being can. It is more conscious and makes decisions in a way similar to how humans make them. AGI remains an aspiration at this moment in time, with forecasts of its arrival ranging from 2029 to 2049, or even never. It may arrive within the next 20 or so years, but it faces challenges relating to hardware, the energy consumption of today's powerful machines, and a true ability to multitask as humans do.</p>
<p><strong>Super Intelligence:</strong> a form of intelligence that exceeds the performance of humans in all domains (as defined by Nick Bostrom). This refers to aspects like general wisdom, problem solving and creativity. For more on Super Intelligence and the types of AI, see the article by <a href="https://codebots.com/ai-powered-bots/the-3-types-of-ai-is-the-third-even-possible" target="_blank" rel="nofollow noopener">Mitchell Tweedie</a>.</p>
<h2><strong>Definition of Machine Learning</strong></h2>
<p>Machine Learning is the field of AI that applies statistical methods to enable computer systems to learn from data towards an end goal. The types of Machine Learning include Supervised, Unsupervised and Semi-Supervised Learning (Reinforcement Learning is dealt with further below).</p>
<ul>
<li>Supervised Learning: algorithms that work with data that is labelled (annotated). Supervised learning algorithms may perform classification or numeric prediction. Classification (Logistic Regression, Decision Tree, KNN, Random Forest, SVM, Naive Bayes, etc.) is the process of predicting the class of given data points, for example learning to classify fruits from labelled images of apples, oranges, lemons, etc. Regression algorithms (Linear Regression, KNN, Gradient Boosting &amp; AdaBoost, etc.) are used for the prediction of continuous numerical values.</li>
<li>Unsupervised Learning: algorithms that discover patterns hidden in data that is not labelled (annotated). An example is segmenting customers into different clusters. Techniques include clustering with K-Means and pattern discovery. A powerful technique from Deep Learning, known as Generative Adversarial Networks (GANs), uses unsupervised learning and is mentioned in a section below.</li>
<li>Semi-Supervised Learning: used when only a small fraction of the data is labelled. An example is provided by <a href="https://www.datarobot.com/wiki/semi-supervised-machine-learning/" target="_blank" rel="nofollow noopener">DataRobot</a>: "When you don’t have enough labeled data to produce an accurate model and you don’t have the ability or resources to get more, you can use semi-supervised techniques to increase the size of your training data. For example, imagine you are developing a model for a large bank intended to detect fraud. Some fraud you know about, but other instances of fraud slipped by without your knowledge. You can label the dataset with the fraud instances you’re aware of, but the rest of your data will remain unlabelled."</li>
</ul>
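Supervised learning in miniature: a 1-nearest-neighbour classifier, one of the simplest classification algorithms of the kind listed above. The training data (fruit weights and labels) is invented purely for this sketch.

```python
# Labelled (annotated) training data, as supervised learning requires:
# feature = fruit weight in grams, label = fruit name. Toy numbers only.
train = [(120, "apple"), (130, "apple"), (150, "orange"), (160, "orange")]

def classify(weight):
    # Predict the label of the closest labelled example (1-nearest neighbour).
    _, label = min(train, key=lambda pair: abs(pair[0] - weight))
    return label
```

With unlabelled data, by contrast, an unsupervised method such as K-Means would have to discover the apple/orange grouping on its own, with no names attached.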
<h2><strong>Definition of Deep Learning (DL)</strong></h2>
<p>Artificial Neural Networks are biologically inspired networks that extract abstract features from the data in a hierarchical fashion. Deep Learning refers to the field of Neural Networks with several hidden layers; such a neural network is often referred to as a deep neural network. Much of the AI revolution during this decade has been related to developments in Deep Learning, as noted by the Economist article "<a href="https://www.economist.com/special-report/2016/06/23/from-not-working-to-neural-networking" target="_blank" rel="nofollow noopener">From not working to neural networking</a>".</p>
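A minimal sketch of why hidden layers matter: with one hidden layer of two units and hand-picked weights (chosen here purely for illustration, not learned), a network can compute XOR, a pattern a single-layer perceptron cannot represent.

```python
def step(z):
    # Threshold activation, as in the original perceptron.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two units with hand-picked weights and thresholds.
    h_or  = step(x1 + x2 - 0.5)   # fires if either input is on (OR)
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are on (AND)
    # Output unit: OR minus AND yields exclusive-or, a pattern that is not
    # linearly separable and so out of reach for a single-layer perceptron.
    return step(h_or - h_and - 0.5)
```

Deep learning stacks many such nonlinear layers and, crucially, learns the weights from data rather than fixing them by hand as done here.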
<h2><strong>The Early Foundations</strong></h2>
<p>AI and Machine Learning are built upon foundations from Mathematics and Computer Science (CS). Important techniques used within AI &amp; Machine Learning were invented before CS itself existed. Key examples include the work of Thomas Bayes, which led Pierre-Simon Laplace to define <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem" target="_blank" rel="nofollow noopener">Bayes’ Theorem (1812)</a>. <a href="https://en.wikipedia.org/wiki/Least_squares" target="_blank" rel="nofollow noopener">The least squares method</a> for data fitting, from Adrien-Marie Legendre in 1805, is another example. Furthermore, Andrey Markov developed methods that went on to be termed <a href="https://en.wikipedia.org/wiki/Markov_chain" target="_blank" rel="nofollow noopener">Markov Chains (1913)</a>. Moreover, the foundations of first-order logic were developed independently by the German mathematician and philosopher <a href="https://en.wikipedia.org/wiki/Begriffsschrift" target="_blank" rel="nofollow noopener">Gottlob Frege</a> in 1879 and the American philosopher and mathematician <a href="https://en.wikipedia.org/wiki/Charles_Sanders_Peirce" target="_blank" rel="nofollow noopener">Charles Sanders Peirce</a>, who published articles on the subject between 1867 and 1906.</p>
<p>For AI or Machine Learning to exist we need hardware in the form of computers. In 1936 (with a correction in 1938) Alan Turing, a Cambridge mathematician, produced a paper entitled <a href="https://londmathsoc.onlinelibrary.wiley.com/doi/pdf/10.1112/plms/s2-43.6.544" target="_blank" rel="nofollow noopener"><em>On Computable Numbers, with an Application to the Entscheidungsproblem</em></a>, in which a theoretical machine, known as a universal computing machine, possessed an infinite store (memory) holding both data and instructions. It is termed a Universal Turing Machine today. The 1940s witnessed the development of stored-program computing, with programs held within the same memory used for the data. In 1945 the von Neumann architecture (Princeton architecture) was published by John von Neumann in the <a href="https://fa82ee93-a-62cb3a1a-s-sites.googlegroups.com/site/michaeldgodfrey/vonneumann/vnedvac.pdf?attachauth=ANoY7cojQ-EM5dQ6kWO6XJC79KiyPD08V6k-WEQTK9Ia_W51l9hJyps4IlIqDOWJt8JkOzzjNSe2ngjwdRM8LAxkIB803pPfJI7kT3J--PlkyrkZep6VqXDEbj0bDJK2np6rPKGf7s7DR--rlPZfhlaVKqycw21fKdMyeBqwFuQCz_-ZjswHfC-ERuU6F4FCVh5k52fD8xLwECm4qfgtiMiEtr-avNlbtGF59c4jmHAyFwa-QDXeTVw%3D&amp;attredirects=0" target="_blank" rel="nofollow noopener">First Draft of a Report on the EDVAC</a>. It proposed a <a href="https://www.techopedia.com/definition/32480/von-neumann-architecture" target="_blank" rel="nofollow noopener">theoretical design for a stored-program computer that serves as the basis for almost all modern computers</a>.</p>
<p>Other key developments in computing include:</p>
<ul>
<li>The <a href="https://en.wikipedia.org/wiki/Manchester_Baby" target="_blank" rel="nofollow noopener">Manchester Small-Scale Experimental Machine</a> in 1948;</li>
<li><a href="https://en.wikipedia.org/wiki/EDSAC" target="_blank" rel="nofollow noopener">Cambridge’s EDSAC</a> and the <a href="https://en.wikipedia.org/wiki/Manchester_Mark_1" target="_blank" rel="nofollow noopener">Manchester Mark 1</a> in 1949;</li>
<li>The <a href="https://en.wikipedia.org/wiki/EDVAC" target="_blank" rel="nofollow noopener">University of Pennsylvania’s EDVAC</a> in 1951.</li>
</ul>
<p>Contributions to AI and Machine Learning are set out below in a mostly chronological order, with some exceptions: for example, I placed the more recent Boosting models after the section on the arrival of Boosting approaches, and Evolutionary Genetic Algorithms just above Neuroevolution to allow for a logical connection. The bullet points below provide a non-exhaustive list of contributions up until the AI Winters, with the later sections covering developments that occurred after the AI Winters:</p>
<ul>
<li>The Minimax theorem was proven by <a href="https://cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/game-theory/Minimax.html" target="_blank" rel="nofollow noopener">John von Neumann in 1928</a>. For more on Minimax see the article by <a href="https://www.hackerearth.com/blog/artificial-intelligence/minimax-algorithm-alpha-beta-pruning/" target="_blank" rel="nofollow noopener">Rashmi Jain</a>, where it is described as a "recursive algorithm which is used to choose an optimal move for a player assuming that the other player is also playing optimally. It is used in games such as tic-tac-toe, go, chess, checkers, and many other two-player games." Minimax is a strategy of always minimizing the maximum possible loss which can result from a choice that a player makes.</li>
<li>In <a href="https://en.wikipedia.org/wiki/Principal_component_analysis" target="_blank" rel="nofollow noopener">1933</a> and <a href="https://www.jstor.org/stable/2333955?origin=crossref&amp;seq=1#page_scan_tab_contents" target="_blank" rel="nofollow noopener">1936</a>, <a href="https://en.wikipedia.org/wiki/Harold_Hotelling" target="_blank" rel="nofollow noopener">Harold Hotelling</a> independently developed and named Principal Component Analysis (PCA), albeit it is noted that <a href="https://en.wikipedia.org/wiki/Principal_component_analysis" target="_blank" rel="nofollow noopener">Karl Pearson</a> first pioneered PCA in <a href="https://www.tandfonline.com/doi/abs/10.1080/14786440109462720" target="_blank" rel="nofollow noopener">1901</a>. In simple words, PCA is a method of extracting important variables (in the form of components) from a large set of variables available in a data set.</li>
<li>The initial steps in developing the perceptron used in Neural Networks were taken in 1943 by <a href="https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1" target="_blank" rel="nofollow noopener">McCulloch and Pitts</a>.</li>
<li>In 1944 <a href="https://papers.tinbergen.nl/02119.pdf" target="_blank" rel="nofollow noopener">Joseph Berkson</a> proposed <a href="https://en.wikipedia.org/wiki/Logistic_regression#Logistic_model" target="_blank" rel="nofollow noopener">Logistic Regression</a> as a general statistical model.</li>
<li>The Breadth First Search algorithm was originated by <a href="https://en.wikipedia.org/wiki/Breadth-first_search" target="_blank" rel="nofollow noopener">Konrad Zuse in 1945</a> but not <a href="https://web.archive.org/web/20150326055019/http://www.graph500.org/specifications#sec-5#sec-5" target="_blank" rel="nofollow noopener">published</a> until 1972.</li>
<li><a href="https://en.wikipedia.org/wiki/Claude_Shannon" target="_blank" rel="nofollow noopener">Claude Shannon</a> published <a href="https://ieeexplore.ieee.org/document/6773024" target="_blank" rel="nofollow noopener">A Mathematical Theory of Communication</a> in the Bell System Technical Journal in 1948. For more on Shannon Entropy there is a good overview entitled "<a href="https://medium.com/swlh/shannon-entropy-in-the-context-of-machine-learning-and-ai-24aee2709e32" target="_blank" rel="nofollow noopener">Shannon Entropy in the context of Machine Learning and AI</a>" by Frank Preiswerk, which explains how Shannon Entropy is used as a measure of the information content of probability distributions.</li>
<li>In 1949 Donald Hebb introduced Hebbian Theory as a means to explain associative learning, whereby neuron cells activating simultaneously results in notable gains in synaptic strength between those cells. <a href="https://en.wikipedia.org/wiki/Hebbian_theory" target="_blank" rel="nofollow noopener">Hebbian Theory</a> is summarised on Wikipedia as "Neurons that fire together, wire together. Neurons that fire out of sync, fail to link... In the study of Neural Networks in cognitive function, it is often regarded as the neuronal basis of Unsupervised Learning."</li>
<li><a href="https://en.wikipedia.org/wiki/Turing_test" target="_blank" rel="nofollow noopener">Alan Turing</a> posed the question about intelligent machines (now known as the Turing Test) in 1950 when publishing Computing Machinery and Intelligence, in which he asked: “Can machines think?” – a question that we still ask today. Turing developed the notion of the 'imitation game', whereby a person had to determine whether they were communicating with a person or a computer (an intelligent agent) via typed messages.</li>
<li>Marvin Minsky and Dean Edmonds developed the first neural network machine, the <a href="https://en.wikipedia.org/wiki/Stochastic_neural_analog_reinforcement_calculator" target="_blank" rel="nofollow noopener">Stochastic Neural Analog Reinforcement Calculator (SNARC)</a>, in 1951. Minsky later went on to play a fundamental role in the development of MIT’s research efforts in Computer Science and AI.</li>
<li><a href="https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)" target="_blank" rel="nofollow noopener">John McCarthy coined the term Artificial Intelligence</a> in 1955. The field of AI research was born at a workshop at Dartmouth College in 1956, with Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) in attendance.</li>
    <li><a href="http://%20%20%20http//citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.335.3398&amp;rep=rep1&amp;type=pdf" target="_blank" rel="nofollow noopener">Frank Rosenblatt designed Perceptron</a>&nbsp;in 1958. The main goal of this was pattern and shape recognition.&nbsp;However, in a&nbsp;<a href="https://en.wikipedia.org/wiki/Perceptron" target="_blank" rel="nofollow noopener">press conference that the US Navy arranged, the New York Times reported the perceptron to be</a>&nbsp;"the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." This resulted in heated debate amongst AI researchers at the time and despite the perceptron appearing to show potential, the single layer perceptrons are constrained to learning linearly separable patterns meaning that it could not be trained to recognise many classes of patterns (vs multi-layer perceptrons that were capable of producing an&nbsp;<a href="https://en.wikipedia.org/wiki/Exclusive_or" target="_blank" rel="nofollow noopener">XOR</a>&nbsp;(exclusive Or) and hence capable of dealing with non linearly separable patterns). Claims of exaggeration resulted in a setback for this area of research.&nbsp;</li>\r\n
    <li><a href="https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)" target="_blank" rel="nofollow noopener">John McCarthy</a>&nbsp;developed Lisp in 1958 while he was at the&nbsp;<a href="https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology" target="_blank" rel="nofollow noopener">Massachusetts Institute of Technology</a>&nbsp;(MIT). With the language attaining the status from those in common usage&nbsp;&nbsp;of second oldest high level programming language&nbsp;&nbsp;today.&nbsp;The design was published in 1960 in a paper entitled&nbsp;<a href="https://en.wikipedia.org/wiki/Lisp_(programming_language)" target="_blank" rel="nofollow noopener">"Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I"</a>.</li>\r\n
<li>In 1959 Arthur Samuel coined the term Machine Learning while at IBM, when publishing <a href="https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.368.2254" target="_blank" rel="nofollow noopener">Some Studies in Machine Learning Using the Game of Checkers</a>. “Machine learning is the subfield of computer science that gives computers the ability to learn without being programmed.” — Arthur Samuel, 1959.</li>
<li><a href="https://brilliant.org/wiki/dijkstras-short-path-finder/" target="_blank" rel="nofollow noopener">Dijkstra’s shortest path algorithm</a>, published in 1959, demonstrated the successful application of a greedy algorithm for finding the shortest path through a graph. This blog will not state who invented Greedy or Hill Climbing algorithms.</li>
<li>In 1961 <a href="https://dl.acm.org/citation.cfm?doid=321075.321084" target="_blank" rel="nofollow noopener">Maron</a> introduced the <a href="https://en.wikipedia.org/wiki/Naive_Bayes_classifier" target="_blank" rel="nofollow noopener">Naive Bayes</a> classifier, albeit under a different name, for text retrieval. Naive Bayes is a powerful algorithm used for classification: it predicts membership probabilities for each class, such as the probability that a given record or data point belongs to a particular class. A useful overview is provided by Jason Brownlee in <a href="https://machinelearningmastery.com/naive-bayes-for-machine-learning/" target="_blank" rel="nofollow noopener">Naive Bayes for Machine Learning</a>.</li>
<li>In 1965 <a href="https://en.wikipedia.org/wiki/Expert_system" target="_blank" rel="nofollow noopener">Edward Feigenbaum</a> of the Stanford Heuristic Programming Project introduced expert systems: computer systems based upon replicating the decision making of a human expert. Expert systems were mostly represented as if-then rules, designed with the intention of solving complicated problems via reasoning applied through bodies of knowledge.</li>
<li><a href="https://developer.nvidia.com/blog/deep-learning-nutshell-history-training/" target="_blank" rel="nofollow noopener">Tim Dettmers</a> notes that "The earliest deep-learning-like algorithms that had multiple <a href="https://developer.nvidia.com/blog/parallelforall/deep-learning-nutshell-core-concepts#layer" target="_blank" rel="nofollow noopener">layers</a> of non-linear features can be traced back to Ivakhnenko and Lapa in 1965 (Figure below), who used thin but deep models with polynomial <a href="https://developer.nvidia.com/blog/parallelforall/deep-learning-nutshell-core-concepts#activation-function" target="_blank" rel="nofollow noopener">activation functions</a> which they analyzed with statistical methods. In each layer, they selected the best features through statistical methods and forwarded them to the next layer. They did not use <a href="https://developer.nvidia.com/blog/deep-learning-nutshell-history-training/#backpropagation" target="_blank" rel="nofollow noopener">backpropagation</a> to train their <a href="https://developer.nvidia.com/blog/parallelforall/deep-learning-nutshell-core-concepts#artificial-neural-network" target="_blank" rel="nofollow noopener">network</a> end-to-end but used layer-by-layer least squares fitting where previous layers were independently fitted from later layers."</li>
</ul>
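Shannon entropy, mentioned in the 1948 entry above, is easy to compute directly from its definition, H(p) = −Σ pᵢ·log₂ pᵢ. The two example distributions below are toy inputs chosen for illustration.

```python
import math

def shannon_entropy(probs):
    # H(p) = -sum(p_i * log2(p_i)), in bits; zero-probability
    # outcomes contribute nothing to the sum.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain: exactly one bit per toss.
fair = shannon_entropy([0.5, 0.5])
# A heavily biased coin is more predictable, so it carries fewer bits.
biased = shannon_entropy([0.9, 0.1])
```

This is the sense in which entropy measures the information content of a probability distribution: the more predictable the outcome, the less information each observation conveys.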
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/AI_Complicated_Graph.png" alt="AI_Complicated_Graph.png" /></div>\r\n
    <p>The architecture of the first known deep network which was trained by Ukrainian researcher Alexey Grigorevich Ivakhnenko in 1965. The feature selection steps after every layer lead to an ever-narrowing architecture which terminates when no further improvement can be achieved by the addition of another layer. Image of Prof. Alexey Ivakhnenko courtesy of&nbsp;<a href="https://en.wikipedia.org/wiki/File:Photo_of_Prof._Alexey_G._Ivakhnenko.jpg" target="_blank" rel="nofollow noopener">Wikipedia</a>.&nbsp;<a href="https://en.wikipedia.org/wiki/Alexey_Ivakhnenko" target="_blank" rel="nofollow noopener">Alexey Ivakhnenko</a>&nbsp;is most famous for developing the&nbsp;<a href="https://en.wikipedia.org/wiki/Group_Method_of_Data_Handling" target="_blank" rel="nofollow noopener">Group Method of Data Handling</a>&nbsp;(GMDH), a method of inductive statistical learning, for which he is sometimes referred to as the "<a href="https://en.wikipedia.org/wiki/Alexey_Ivakhnenko" target="_blank" rel="nofollow noopener">Father of Deep Learning</a>"</p>\r\n
    <ul>\r\n
    <li><a href="https://en.wikipedia.org/wiki/K-means_clustering#History" target="_blank" rel="nofollow noopener">The term "<em>k</em>-means"</a>&nbsp;was first used in "<a href="https://zbmath.org/?format=complete&amp;q=an:0214.46201" target="_blank" rel="nofollow noopener">Some methods for classification and analysis of multivariate observations</a>" by James MacQueen in 1967,&nbsp;though the idea goes back to&nbsp;<a href="https://en.wikipedia.org/wiki/K-means_clustering" target="_blank" rel="nofollow noopener">Hugo Stenhuis in 1956</a>.&nbsp;The standard algorithm was first proposed by&nbsp;<a href="https://en.wikipedia.org/wiki/K-means_clustering#History" target="_blank" rel="nofollow noopener">Stuart Lloyd&nbsp;in 1957&nbsp;</a>as a technique for pulse code modulation though it wasn't published as a&nbsp;<a href="https://cs.nyu.edu/~roweis/csc2515-2006/readings/lloyd57.pdf" target="_blank" rel="nofollow noopener">journal article until 1982</a>.&nbsp;In&nbsp;<a href="https://en.wikipedia.org/wiki/K-means_clustering#History" target="_blank" rel="nofollow noopener">1965, E. W. Forgy</a>&nbsp;published essentially the same method, which is why it is sometimes referred to as Lloyd-Forgy. K-means is an unsupervised learning algorithm.&nbsp;<a href="https://www.datascience.com/blog/k-means-clustering" target="_blank" rel="nofollow noopener">The&nbsp;K-means clustering algorithm</a>&nbsp;is used to find groups which have not been explicitly labeled in the data. This can be used to confirm business assumptions about what types of groups exist or to identify unknown groups in complex data sets.</li>\r\n
    <li>A* is an&nbsp;<a href="https://en.wikipedia.org/wiki/Informed_search_algorithm" target="_blank" rel="nofollow noopener">informed search algorithm</a>, or a&nbsp;<a href="https://en.wikipedia.org/wiki/Best-first_search" target="_blank" rel="nofollow noopener">best-first search</a>, meaning that it is formulated in terms of&nbsp;<a href="https://en.wikipedia.org/wiki/Weighted_graph" target="_blank" rel="nofollow noopener">weighted graphs</a>: starting from a specific starting&nbsp;<a href="https://en.wikipedia.org/wiki/Node_(graph_theory)" target="_blank" rel="nofollow noopener">node</a>&nbsp;of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a&nbsp;<a href="https://en.wikipedia.org/wiki/Tree_(data_structure)" target="_blank" rel="nofollow noopener">tree</a>&nbsp;of paths originating at the start node and extending those paths one edge at a time until its termination criterion is satisfied. The&nbsp;<a href="https://ieeexplore.ieee.org/document/4082128" target="_blank" rel="nofollow noopener">A* algorithm was published by Hart et al. in 1968</a>.</li>\r\n
    </ul>\r\n
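The k-means procedure described above is simple enough to sketch in a few lines. Below is a minimal, illustrative Python implementation of Lloyd's algorithm on a toy 2-D dataset; the data, the naive "first k points" initialisation, and the fixed iteration count are invented for demonstration and are not part of the historical algorithm descriptions.

```python
def kmeans(points, k, iters=20):
    """A minimal sketch of Lloyd's algorithm: alternately assign points to
    their nearest centroid, then move each centroid to its cluster mean."""
    centroids = list(points[:k])          # naive deterministic initialisation
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach every point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2
                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        # Update step: move each non-empty centroid to its cluster mean.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids, clusters

# Two well-separated 2-D groups; k-means should place one centroid in each.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(data, k=2)
```

On this toy data the two centroids converge to roughly (0.1, 0.1) and (5.03, 5.0), splitting the points into the two obvious groups. In practice, random restarts or k-means++ initialisation are used instead of taking the first k points.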
    <h2><strong>The AI Winters</strong></h2>\r\n
    <p>The field of AI faced a rocky road, with a period of setbacks in research and development known as the AI Winters. There were&nbsp;<a href="https://en.wikipedia.org/wiki/AI_winter#cite_note-5" target="_blank" rel="nofollow noopener">two major winters in 1974–1980 and 1987–1993 (smaller episodes followed)</a>. The first AI winter occurred as&nbsp;<a href="https://en.wikipedia.org/wiki/AI_winter" target="_blank" rel="nofollow noopener">DARPA undertook funding cuts</a>&nbsp;in the early 1970s, and Lighthill's 1973 report to the UK Parliament, critical of the lack of AI breakthroughs, resulted in a significant loss of confidence in AI. The second AI winter occurred at a time when&nbsp;<a href="https://en.wikipedia.org/wiki/Machine_learning#cite_note-9" target="_blank" rel="nofollow noopener">probabilistic approaches hit barriers</a>, failing to perform as intended owing to issues with data acquisition and representation. During this period&nbsp;<a href="https://en.wikipedia.org/wiki/Machine_learning#cite_note-9" target="_blank" rel="nofollow noopener">expert systems dominated in the 1980s</a>&nbsp;whilst statistically based approaches were out of favour.&nbsp;<a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=5&amp;ved=2ahUKEwj21q_y_PnhAhXt6eAKHUHQBOMQFjAEegQIAxAC&amp;url=https%3A%2F%2Ffaculty.psau.edu.sa%2Ffiledownload%2Fdoc-7-pdf-a154ffbcec538a4161a406abf62f5b76-original.pdf&amp;usg=AOvVaw0i7pLrlBs9LMW296xeV6b0" target="_blank" rel="nofollow noopener">Russell &amp; Norvig (2003, page 24)</a>&nbsp;noted that&nbsp;“Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon after came a period called the ‘AI Winter’.”</p>\r\n
    <p><a href="https://en.wikipedia.org/wiki/AI_winter" target="_blank" rel="nofollow noopener">Roger Schank</a>&nbsp;and&nbsp;<a href="https://en.wikipedia.org/wiki/Marvin_Minsky" target="_blank" rel="nofollow noopener">Marvin Minsky</a>—two leading AI researchers who had survived the "winter" of the 1970s—<a href="https://en.wikipedia.org/wiki/AI_winter#Appearance" target="_blank" rel="nofollow noopener">warned the business community that enthusiasm for AI</a>&nbsp;had spiraled out of control in the 1980s and that disappointment would certainly follow.&nbsp;<a href="https://en.wikipedia.org/wiki/AI_winter" target="_blank" rel="nofollow noopener">In 1987, three years after Minsky and Schank's&nbsp;prediction, the market for specialized AI hardware collapsed.&nbsp;</a></p>\r\n
    <p>It should be noted that although the AI Winters had an adverse impact upon AI research, there were some advancements in the late 1970s and 1980s, for example:</p>\r\n
    <ul>\r\n
    <li><a href="https://en.wikipedia.org/wiki/Beam_search#cite_note-1" target="_blank" rel="nofollow noopener">Raj Reddy published Beam search</a>&nbsp;in 1977. It is a heuristic search algorithm that expands only the most promising nodes from a given set.</li>\r\n
    <li>In 1980&nbsp;<a href="https://en.wikipedia.org/wiki/Kunihiko_Fukushima" target="_blank" rel="nofollow noopener">Kunihiko Fukushima</a>&nbsp;introduced the&nbsp;<a href="https://www.ncbi.nlm.nih.gov/pubmed/7370364" target="_blank" rel="nofollow noopener">neocognitron</a>, which brought in two of the fundamental layer types of Convolutional Neural Networks (CNNs): convolutional layers and downsampling layers.</li>\r\n
    <li>In 1983&nbsp;<a href="https://science.sciencemag.org/content/220/4598/671" target="_blank" rel="nofollow noopener">Kirkpatrick et al. published Optimization by Simulated Annealing</a>, an adaptation of the&nbsp;<a href="https://aip.scitation.org/doi/10.1063/1.1699114" target="_blank" rel="nofollow noopener">Metropolis–Hastings algorithm</a>.&nbsp;<a href="https://en.wikipedia.org/wiki/Simulated_annealing" target="_blank" rel="nofollow noopener">Simulated Annealing</a>&nbsp;draws an analogy with annealing in metallurgy: the metal is heated so that its crystal structure can rearrange, and as it slowly cools, irregularities are shaken out and a more ordered state is reached. The idea is to escape local optima by allowing some suboptimal moves, which become gradually less frequent as the “temperature” drops and less random movement is allowed.</li>\r\n
    <li>In 1986 Ross Quinlan introduced the&nbsp;<a href="https://link.springer.com/content/pdf/10.1007/BF00116251.pdf" target="_blank" rel="nofollow noopener">Iterative Dichotomiser 3 (ID3)</a>&nbsp;Decision Tree algorithm. A&nbsp;<a href="https://dzone.com/articles/machine-learning-with-decision-trees" target="_blank" rel="nofollow noopener">decision tree</a>&nbsp;is a tree in which every branch node represents a choice between a number of alternatives and every leaf node represents a decision. It is a type of supervised learning algorithm (with a predefined target variable) that is mostly used in classification problems and works for both categorical and continuous input and output variables.</li>\r\n
    <li>In 1986 Randall C. Smith and Peter Cheeseman published&nbsp;<a href="https://journals.sagepub.com/doi/10.1177/027836498600500404" target="_blank" rel="nofollow noopener">On the Representation and Estimation of Spatial Uncertainty</a>, which laid the foundations for&nbsp;<a href="https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping#cite_note-Smith1986-23" target="_blank" rel="nofollow noopener">Simultaneous localization and mapping (SLAM)</a>, the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.</li>\r\n
    <li>In 1987 Boyd et al. applied Logistic Regression in&nbsp;<a href="https://www.ncbi.nlm.nih.gov/pubmed/3106646" target="_blank" rel="nofollow noopener">Evaluating Trauma Care: The TRISS Method for&nbsp;Trauma and Injury Severity Score</a>&nbsp;to predict mortality in injured patients.</li>\r\n
    </ul>\r\n
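The simulated annealing idea described in the list above (accept some worse moves, with decreasing probability as the temperature falls) can be sketched compactly. This is an illustrative pure-Python version assuming a toy 1-D objective, a linear cooling schedule, and made-up hyperparameters; real applications tune the schedule and move distribution to the problem.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0, seed=1):
    """Minimise f: always accept improvements, and accept worse moves
    with probability exp(-delta / T), where T cools over time."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9          # linear cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)        # random neighbouring move
        fc = f(cand)
        # Uphill (worse) moves are allowed early on, letting the search
        # escape local optima; they become rare as the temperature drops.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A bumpy 1-D function with a shallow local minimum near x = 1.1 and a
# deeper global minimum near x = -1.3; we start on the wrong side.
f = lambda x: x ** 4 - 3 * x ** 2 + x
best_x, best_f = simulated_annealing(f, x0=2.0)
```

A pure greedy (hill-climbing) search started at x = 2.0 would usually get trapped in the local minimum near x = 1.1; the random uphill moves let annealing cross the barrier to the deeper minimum on the left.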
    <p><a href="https://en.wikipedia.org/wiki/Machine_learning" target="_blank" rel="nofollow noopener">Neural Networks research had been mostly abandoned by AI and computer science researchers</a>&nbsp;around the time of the AI winters and the rise of Symbolic AI. Researchers Hopfield, Rumelhart and Hinton paved the way for the return of Neural Networks with the reinvention of backpropagation in 1986.</p>\r\n
    <h2><strong>The Rise of Backpropagation</strong></h2>\r\n
    <p><a href="https://www.bbc.com/timelines/zypd97h" target="_blank" rel="nofollow noopener">The basics of continuous backpropagation</a>&nbsp;were derived in the context of&nbsp;<a href="https://en.wikipedia.org/wiki/Control_theory" target="_blank" rel="nofollow noopener">control theory</a>&nbsp;by&nbsp;<a href="https://arc.aiaa.org/doi/10.2514/8.5282" target="_blank" rel="nofollow noopener">Henry J. Kelley</a>&nbsp;in 1960 and by&nbsp;<a href="https://en.wikipedia.org/wiki/Backpropagation#cite_note-kelley1960-11" target="_blank" rel="nofollow noopener">Arthur E. Bryson</a>&nbsp;in 1961, and were later adapted for Neural Networks. Backpropagation then fell out of favour until work by Geoffrey Hinton and others, using fast modern processors, demonstrated its effectiveness. It is termed back-propagation because of the manner in which training works: the error signal travels in the direction opposite to the flow of data (in a feed-forward network), via a recursive process.</p>\r\n
    <p><a href="https://developer.nvidia.com/blog/deep-learning-nutshell-history-training/" target="_blank" rel="nofollow noopener">Tim Dettmers</a>&nbsp;states that "the modern form was derived first by Linnainmaa in his 1970 master's thesis that included FORTRAN code for backpropagation but did not mention its application to neural networks. Even at this point, backpropagation was relatively unknown and very few documented applications of backpropagation existed until the early 1980s (e.g. Werbos in 1982)."</p>\r\n
    <p><a href="https://www.nature.com/articles/323533a0" target="_blank" rel="nofollow noopener">Rumelhart, Hinton and Williams</a>&nbsp;stated in Nature in 1986: “We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units.”</p>\r\n
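The procedure they describe can be sketched for a tiny network. The pure-Python example below trains a 2-4-1 sigmoid network on XOR by gradient descent: the forward pass computes the output, and the backward pass propagates the output error back through the hidden layer to adjust every weight. The architecture, learning rate, seed and epoch count are all illustrative choices, not taken from the paper.

```python
import math
import random

rng = random.Random(42)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# A 2-4-1 network: input -> 4 hidden sigmoid units -> 1 sigmoid output.
W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [rng.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
lr = 0.5

def forward(x):
    h = [sig(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(4)]
    y = sig(sum(W2[j] * h[j] for j in range(4)) + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mean_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: the error flows from the output back towards
        # the input, the reverse of the feed-forward direction.
        dy = (y - t) * y * (1 - y)                               # output delta
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(4)]  # hidden deltas
        for j in range(4):
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy
loss_after = mean_loss()
```

After training, the loss is far lower than at the random start, and the hidden units have come to encode intermediate features of XOR, exactly the "internal 'hidden' units come to represent important features" behaviour the quote describes.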
    <p>&nbsp;<img src="/images/Input_X_Output_Y.png" alt="Input_X_Output_Y.png" /></p>\r\n
    <p>Image Source&nbsp;<a href="https://sebastianraschka.com/faq/docs/visual-backpropagation.html" target="_blank" rel="nofollow noopener">Sebastian Raschka Forward Propagation</a></p>\r\n
    <p>&nbsp;<img src="/images/Input_X_Target_Y.png" alt="Input_X_Target_Y.png" /></p>\r\n
    <p>Image Source&nbsp;<a href="https://sebastianraschka.com/faq/docs/visual-backpropagation.html" target="_blank" rel="nofollow noopener">Sebastian Raschka Back-propagation</a></p>\r\n
    <p><a href="https://en.wikipedia.org/wiki/Yann_LeCun" target="_blank" rel="nofollow noopener">Yann LeCun</a>&nbsp;was a postdoctoral research associate in Geoffrey Hinton's lab at the University of Toronto from 1987 to 1988, during which time he published "<a href="http://yann.lecun.com/exdb/publis/pdf/lecun-88.pdf" target="_blank" rel="nofollow noopener">A Theoretical Framework for Back-Propagation</a>". In 1989&nbsp;<a href="http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf" target="_blank" rel="nofollow noopener">Yann LeCun et al</a>. applied back-propagation to recognise handwritten zip code digits provided by the US Postal Service. The technique became a&nbsp;<a href="https://en.wikipedia.org/wiki/Convolutional_neural_network#History" target="_blank" rel="nofollow noopener">foundation of modern computer vision</a>.</p>\r\n
    <h2><strong>Machine Learning Steps Forwards</strong></h2>\r\n
    <p>The late 1980s, and in particular the 1990s, were a period of increased emphasis upon the intersection of CS and Statistics, resulting in a shift towards probabilistic, data-driven approaches to AI. Some key examples are set out in the section below.</p>\r\n
    <ul>\r\n
    <li><a href="https://en.wikipedia.org/wiki/Boosting_(machine_learning)" target="_blank" rel="nofollow noopener">Boosting</a>&nbsp;is a&nbsp;<a href="https://en.wikipedia.org/wiki/Ensemble_learning" target="_blank" rel="nofollow noopener">machine learning ensemble</a>&nbsp;<a href="https://en.wikipedia.org/wiki/Meta-algorithm" target="_blank" rel="nofollow noopener">meta-algorithm</a>&nbsp;primarily for reducing&nbsp;<a href="https://en.wikipedia.org/wiki/Supervised_learning#Bias-variance_tradeoff" target="_blank" rel="nofollow noopener">bias</a>&nbsp;(and also variance) that converts weak learners into strong ones. A&nbsp;<a href="https://en.wikipedia.org/wiki/Boosting_(machine_learning)" target="_blank" rel="nofollow noopener">weak learner</a>&nbsp;is a classifier that labels examples only somewhat better than random guessing, and is hence only slightly correlated with the true classification, whereas a strong learner is well correlated with the true classification. Boosting arose from questions raised by&nbsp;<a href="https://en.wikipedia.org/wiki/Michael_Kearns_(computer_scientist)" target="_blank" rel="nofollow noopener">Kearns</a>&nbsp;and&nbsp;<a href="https://en.wikipedia.org/wiki/Leslie_Valiant" target="_blank" rel="nofollow noopener">Valiant</a>&nbsp;in&nbsp;<a href="https://en.wikipedia.org/wiki/Boosting_(machine_learning)" target="_blank" rel="nofollow noopener">1988 and 1989</a>, and the positive answer given by&nbsp;<a href="https://en.wikipedia.org/wiki/Robert_Schapire" target="_blank" rel="nofollow noopener">Robert Schapire</a>&nbsp;led to the development of Boosting.</li>\r\n
    <li>The work of&nbsp;<a href="https://en.wikipedia.org/wiki/Hugh_F._Durrant-Whyte" target="_blank" rel="nofollow noopener">Hugh F. Durrant-Whyte&nbsp;</a>in the early 1990s enabled autonomous vehicles to deal with uncertainty as well as the ability to localize themselves in spite of noisy sensor readings using SLAM.</li>\r\n
    <li>In 1992 research work on Support Vector Machines (SVM) was published by&nbsp;<a href="https://en.wikipedia.org/wiki/Support-vector_machine#cite_note-HavaSiegelmann-2" target="_blank" rel="nofollow noopener">Bernhard E. Boser, Isabelle M. Guyon and&nbsp;Vladimir N. Vapnik</a>. The initial SVM algorithm was invented in 1963 by&nbsp;<a href="https://en.wikipedia.org/wiki/Support-vector_machine#cite_note-HavaSiegelmann-2" target="_blank" rel="nofollow noopener">Vladimir N. Vapnik&nbsp;and&nbsp;Alexey Ya. Chervonenkis</a>.&nbsp;The current standard (soft margin) version was proposed by&nbsp;<a href="https://en.wikipedia.org/wiki/Support-vector_machine#History" target="_blank" rel="nofollow noopener">Corinna Cortes&nbsp;and Vapnik in 1993</a>&nbsp;and&nbsp;<a href="https://link.springer.com/article/10.1007%2FBF00994018" target="_blank" rel="nofollow noopener">published in 1995</a>. An&nbsp;<a href="https://www.analyticsvidhya.com/blog/2017/09/understaing-support-vector-machine-example-code/" target="_blank" rel="nofollow noopener">SVM</a>&nbsp;is a supervised Machine Learning algorithm that can be used for both classification and regression challenges, although it is mostly used in&nbsp;<a href="https://www.analyticsvidhya.com/blog/2017/09/understaing-support-vector-machine-example-code/" target="_blank" rel="nofollow noopener">classification problems</a>.&nbsp;SVM use cases include&nbsp;<a href="https://data-flair.training/blogs/applications-of-svm/" target="_blank" rel="nofollow noopener">Bioinformatics</a>: in recent years,&nbsp;<a href="https://data-flair.training/blogs/applications-of-svm/" target="_blank" rel="nofollow noopener">SVM algorithms have been extensively applied for protein remote homology detection</a>&nbsp;and widely used for classifying biological sequences, for example classifying genes, grouping patients on the basis of their genes, and many other biological problems.</li>\r\n
    </ul>\r\n
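The soft-margin idea, penalising points that fall inside the margin via the hinge loss, can be illustrated with a short sketch. The code below is not the original Cortes–Vapnik formulation: it is a Pegasos-style stochastic subgradient method on a toy 2-D dataset, with invented hyperparameters and the bias term omitted (the toy data are centred around the origin, so no bias is needed).

```python
import random

def train_linear_svm(samples, lam=0.01, epochs=200, seed=0):
    """Soft-margin linear SVM trained by stochastic subgradient descent
    on the hinge loss (a Pegasos-style sketch, no bias term)."""
    rng = random.Random(seed)
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        rng.shuffle(samples)
        for x, y in samples:
            t += 1
            lr = 1.0 / (lam * t)                 # decaying step size
            margin = y * (w[0] * x[0] + w[1] * x[1])
            if margin < 1:
                # Margin violator: shrink w and push it towards y * x.
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
            else:
                # Correctly classified with margin: only regularisation applies.
                w = [wi - lr * lam * wi for wi in w]
    return w

# Two linearly separable 2-D classes with labels +1 / -1.
samples = [((2.0, 2.0), 1), ((3.0, 3.0), 1), ((2.5, 3.5), 1),
           ((-2.0, -2.0), -1), ((-3.0, -1.0), -1), ((-2.5, -3.0), -1)]
w = train_linear_svm(list(samples))
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] >= 0 else -1
```

Only margin violators contribute a data term to the update, which is the algorithmic counterpart of the geometric picture: the separating hyperplane is determined by the support vectors alone.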
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/Support_Vectors.png" alt="Support_Vectors.png" /></div>\r\n
    <p><em>Source for Image Above&nbsp;<a href="https://www.analyticsvidhya.com/blog/2017/09/understaing-support-vector-machine-example-code/" target="_blank" rel="nofollow noopener">Analytics Vidhya Understanding Support Vector Machine algorithm</a></em></p>\r\n
    <p>&nbsp;</p>\r\n
    <ul>\r\n
    <li>In 1992 Altman published&nbsp;<a href="https://www.tandfonline.com/doi/abs/10.1080/00031305.1992.10475879" target="_blank" rel="nofollow noopener">An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression</a>, describing&nbsp;<a href="https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm" target="_blank" rel="nofollow noopener">the&nbsp;k-nearest neighbours algorithm&nbsp;(k-NN)</a>, a non-parametric method used for classification and regression.</li>\r\n
    <li><a href="https://en.wikipedia.org/wiki/DBSCAN" target="_blank" rel="nofollow noopener">Density-based spatial clustering of applications with noise&nbsp;(DBSCAN)</a>&nbsp;is a&nbsp;<a href="https://en.wikipedia.org/wiki/Data_clustering" target="_blank" rel="nofollow noopener">data clustering</a>&nbsp;algorithm proposed by Martin Ester,&nbsp;<a href="https://en.wikipedia.org/wiki/Hans-Peter_Kriegel" target="_blank" rel="nofollow noopener">Hans-Peter Kriegel</a>, Jörg Sander and Xiaowei Xu in 1996.</li>\r\n
    </ul>\r\n
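k-NN is one of the simplest classifiers to write down: classify a point by a majority vote among its k closest training examples. A minimal Python sketch on invented toy data (the two clusters and labels are purely illustrative):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours
    (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D training set with two labelled clusters.
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((0.9, 1.1), "a"),
         ((4.0, 4.0), "b"), ((4.2, 3.9), "b"), ((3.8, 4.1), "b")]

label_1 = knn_predict(train, (1.1, 1.0))   # query near the "a" cluster
label_2 = knn_predict(train, (4.1, 4.0))   # query near the "b" cluster
```

Note that k-NN is non-parametric in exactly the sense the bullet above describes: there is no training step at all, the "model" is simply the stored data.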
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/Random_Forest_Simplified.png" alt="Random_Forest_Simplified.png" /></div>\r\n
    <p><em>Source for Random Forest Image above Will Koehrsen Random Forest Simple Explanation</em></p>\r\n
    <ul>\r\n
    <li>Another example is the research work conducted by&nbsp;<a href="https://web.archive.org/web/20160417030218/http://ect.bell-labs.com/who/tkh/publications/papers/odt.pdf" target="_blank" rel="nofollow noopener">Ho in 1995</a>&nbsp;and&nbsp;<a href="https://ieeexplore.ieee.org/document/709601" target="_blank" rel="nofollow noopener">1998 on Random Forests</a>. A&nbsp;<a href="https://towardsdatascience.com/the-random-forest-algorithm-d457d499ffcd" target="_blank" rel="nofollow noopener">Random Forest is a supervised learning algorithm</a>&nbsp;that builds a "forest" of randomised Decision Trees. A good overview of the Random Forest is provided by&nbsp;<a href="https://towardsdatascience.com/the-random-forest-algorithm-d457d499ffcd" target="_blank" rel="nofollow noopener">Niklas Donges</a>, who explains that the forest is made from an ensemble of Decision Trees, often&nbsp;<a href="https://towardsdatascience.com/the-random-forest-algorithm-d457d499ffcd" target="_blank" rel="nofollow noopener">trained with the bagging method</a>. The overall objective of bagging is that the combined learning of the models yields an improved overall result. A major&nbsp;<a href="https://towardsdatascience.com/the-random-forest-algorithm-d457d499ffcd" target="_blank" rel="nofollow noopener">advantage of random forest</a>&nbsp;is that it can be used for both classification and regression problems, which form the majority of current machine learning systems.</li>\r\n
    <li><a href="https://towardsdatascience.com/the-random-forest-algorithm-d457d499ffcd" target="_blank" rel="nofollow noopener">Niklas Donges&nbsp;</a>provides use case examples for the Random Forest algorithm: "In Banking it is used for example to detect customers who will use the bank’s services more frequently than others and repay their debt in time. In this domain it is also used to detect fraud customers who want to scam the bank. In finance, it is used to determine a stock’s behaviour in the future. In the healthcare domain it is used to identify the correct combination of components in medicine and to analyze a patient’s medical history to identify diseases."</li>\r\n
    <li>1997 saw the introduction of&nbsp;<a href="https://en.wikipedia.org/wiki/AdaBoost" target="_blank" rel="nofollow noopener"><strong>AdaBoost</strong> (Adaptive Boosting)</a>, a machine learning&nbsp;<a href="https://en.wikipedia.org/wiki/Meta-algorithm" target="_blank" rel="nofollow noopener">meta-algorithm</a>&nbsp;formulated by&nbsp;<a href="https://en.wikipedia.org/wiki/Yoav_Freund" target="_blank" rel="nofollow noopener">Yoav Freund</a>&nbsp;and&nbsp;<a href="https://en.wikipedia.org/wiki/Robert_Schapire" target="_blank" rel="nofollow noopener">Robert Schapire</a>, who won the 2003&nbsp;<a href="https://en.wikipedia.org/wiki/G%C3%B6del_Prize" target="_blank" rel="nofollow noopener">Gödel Prize</a>&nbsp;for their work. It can be used in conjunction with many other types of learning algorithms to improve performance. The output of the other learning algorithms ('weak learners') is combined into a weighted sum that represents the final output of the boosted classifier. AdaBoost is adaptive in the sense that subsequent weak learners are tweaked in favour of those instances misclassified by previous classifiers. AdaBoost is sensitive to noisy data and&nbsp;<a href="https://en.wikipedia.org/wiki/Outlier" target="_blank" rel="nofollow noopener">outliers</a>, though in some problems it can be less susceptible to the&nbsp;<a href="https://en.wikipedia.org/wiki/Overfitting_(machine_learning)" target="_blank" rel="nofollow noopener">overfitting</a>&nbsp;problem than other learning algorithms. The individual learners can be weak, but as long as the performance of each one is slightly better than random guessing, the final model can be proven to converge towards a strong learner.</li>\r\n
    <li>The Machine Learning approach of&nbsp;<a href="https://en.wikipedia.org/wiki/Gradient_boosting" target="_blank" rel="nofollow noopener">Gradient Boosting</a>&nbsp;is applied to regression and classification problems and results in a predictive model comprising an ensemble of weak prediction models, usually decision trees. The idea of gradient boosting originated in an observation by&nbsp;<a href="https://en.wikipedia.org/wiki/Leo_Breiman" target="_blank" rel="nofollow noopener">Leo Breiman</a>&nbsp;in a paper published in 1997 entitled "<a href="https://statistics.berkeley.edu/sites/default/files/tech-reports/486.pdf" target="_blank" rel="nofollow noopener">Arcing the Edge</a>". For more on Gradient Boosting see "<a href="https://medium.com/mlreview/gradient-boosting-from-scratch-1e317ae4587d" target="_blank" rel="nofollow noopener">Gradient Boosting from&nbsp;scratch</a>" by Prince Grover.</li>\r\n
    <li>Gradient Boosting algorithms have proved very popular over the last five years. Key examples include&nbsp;<a href="https://machinelearningmastery.com/gentle-introduction-xgboost-applied-machine-learning/" target="_blank" rel="nofollow noopener">XGBoost</a>, which proved highly successful in&nbsp;<a href="https://www.kaggle.com/" target="_blank" rel="nofollow noopener">Kaggle</a>&nbsp;competitions with tabular or structured data (see&nbsp;<a href="https://arxiv.org/pdf/1603.02754.pdf" target="_blank" rel="nofollow noopener">Tianqi Chen and Carlos Guestrin</a>&nbsp;2016),&nbsp;<a href="https://github.com/microsoft/LightGBM" target="_blank" rel="nofollow noopener">Light Gradient Boosting Machine (LightGBM), introduced by Microsoft&nbsp;</a>in 2017, and&nbsp;<a href="https://yandex.com/dev/catboost/" target="_blank" rel="nofollow noopener">CatBoost, introduced by Yandex in 2017</a>. For more on Ensemble Learning see the article by&nbsp;<a href="https://machinelearningmastery.com/gradient-boosting-with-scikit-learn-xgboost-lightgbm-and-catboost/" target="_blank" rel="nofollow noopener">Jason Brownlee Ensemble Learning</a>&nbsp;and an article published on&nbsp;<a href="https://www.kdnuggets.com/" target="_blank" rel="nofollow noopener">kdnuggets.com</a>&nbsp;entitled&nbsp;<a href="https://www.kdnuggets.com/2018/03/catboost-vs-light-gbm-vs-xgboost.html" target="_blank" rel="nofollow noopener">CatBoost vs. Light GBM vs. XGBoost</a>.</li>\r\n
    <li>In 1997 Sepp Hochreiter and Jürgen Schmidhuber published their work on the&nbsp;<a href="https://www.mitpressjournals.org/doi/10.1162/neco.1997.9.8.1735" target="_blank" rel="nofollow noopener">Long Short-Term Memory (LSTM)</a>. The LSTM is an artificial Recurrent Neural Network (RNN) architecture that uses feedback connections, enabling it to process not only single data points but also complete sequences of data (such as speech or video). It has been applied to areas such as speech and handwriting recognition, and its powerful capabilities with time series data have seen many within the financial sector use it to model and predict stock markets and perform credit analysis.</li>\r\n
    </ul>\r\n
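The boosting loop behind AdaBoost is short enough to sketch. The example below uses 1-D threshold "stumps" as the weak learners on an invented toy dataset: each round fits the best stump on the current example weights, gives it a vote proportional to its accuracy, then up-weights the examples it got wrong. The data, the stump form and the round count are all illustrative choices.

```python
import math

def train_adaboost(xs, ys, rounds=3):
    """AdaBoost with 1-D threshold stumps h(x) = polarity * sign(x - thr)
    as the weak learners; ys are +1 / -1 labels."""
    n = len(xs)
    w = [1.0 / n] * n                      # start with uniform example weights
    ensemble = []                          # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        # Exhaustively pick the stump with the smallest weighted error.
        best = None
        for thr in xs:
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if pol * (1 if xi > thr else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-12), 1 - 1e-12)      # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # this stump's vote weight
        ensemble.append((alpha, thr, pol))
        # Re-weight: misclassified examples grow, correct ones shrink.
        w = [wi * math.exp(-alpha * yi * pol * (1 if xi > thr else -1))
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * p * (1 if x > t else -1) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# No single threshold separates these labels (positives sit in the middle),
# but a weighted combination of three stumps classifies them all correctly.
xs = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
ys = [-1, -1, 1, 1, 1, -1]
model = train_adaboost(xs, ys)
```

Each individual stump misclassifies at least one point here, yet the weighted vote of three stumps fits the data exactly: the "weak learners combine into a strong one" behaviour described above.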
    <p>&nbsp;<img src="/images/Ht.png" alt="Ht.png" width="1033" height="396" /></p>\r\n
    <p><em><a href="https://www.analyticsvidhya.com/blog/2017/12/fundamentals-of-deep-learning-introduction-to-lstm/" target="_blank" rel="nofollow noopener">Source for Image Above Analytics Vidyha Introduction to LSTM</a></em></p>\r\n
    <ul>\r\n
    <li><a href="https://en.wikipedia.org/wiki/Convolutional_neural_network#History" target="_blank" rel="nofollow noopener">LeNet-5, a pioneering 7-level convolutional network by&nbsp;LeCun&nbsp;et al. in 1998</a>&nbsp;that classifies digits, was applied by several banks to recognize hand-written numbers on checks (British English:&nbsp;cheques) digitized in 32x32 pixel images.</li>\r\n
    </ul>\r\n
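The two CNN building blocks mentioned in connection with the neocognitron and LeNet-5, convolutional layers and downsampling layers, can be illustrated directly. This sketch uses a hand-made vertical-edge kernel on a tiny invented image; real networks learn their kernel values during training.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most CNN
    implementations): slide the kernel over the image, taking dot products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def max_pool_2x2(fmap):
    """2x2 max-pooling: one simple form of the 'downsampling' layer."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 6x6 image with a vertical edge down the middle, and a vertical-edge
# detector kernel: the feature map responds only where the edge sits.
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
fmap = conv2d(image, kernel)      # each row comes out as [0, 3, 3, 0]
pooled = max_pool_2x2(fmap)       # downsampled to a 2x2 summary
```

The convolution detects the pattern wherever it occurs in the image (the same kernel is reused at every position), and the pooling step makes the response robust to small shifts; together these give CNNs the tolerance to distortion and translation that LeCun describes below.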
    <p><a href="http://yann.lecun.com/exdb/lenet/" target="_blank" rel="nofollow noopener">Yann LeCun</a>&nbsp;stated the following:</p>\r\n
    <p>"CNNs are a special kind of multi-layer neural networks. Like almost every other neural network they are trained with a version of the back-propagation algorithm. Where they differ is in the architecture."</p>\r\n
    <p>"CNNs are designed to recognize visual patterns directly from pixel images with minimal preprocessing."</p>\r\n
    <p>"They can recognize patterns with extreme variability (such as handwritten characters), and with robustness to distortions and simple geometric transformations."</p>\r\n
    <p>"LeNet-5 is our latest convolutional network designed for handwritten and machine-printed character recognition."</p>\r\n
    <h2><strong>"Deep Learning" is Born</strong></h2>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/Cat_AI.jpeg" alt="Cat_AI.jpeg" /></div>\r\n
    <p><em>Source for image above&nbsp;<a href="https://www.bbc.com/timelines/zypd97h" target="_blank" rel="nofollow noopener">BBC<br /><br /></a></em></p>\r\n
    <p><a href="https://www.thestar.com/news/world/2015/04/17/how-a-toronto-professors-research-revolutionized-artificial-intelligence.html" target="_blank" rel="nofollow noopener">In 2006, Hinton made a breakthrough&nbsp;</a>whereby Deep Learning (Deep Neural Networks) beat other AI algorithms with&nbsp;<a href="https://www.thestar.com/news/world/2015/04/17/how-a-toronto-professors-research-revolutionized-artificial-intelligence.html" target="_blank" rel="nofollow noopener">Kate Allen of the Star noting</a>&nbsp;"In quick succession, Neural Networks, rebranded as “Deep Learning,” began beating traditional AI in every critical task: recognizing speech, characterizing images, generating natural, readable sentences. Google, Facebook, Microsoft and nearly every other technology giant have embarked on a Deep Learning gold rush, competing for the world’s tiny clutch of experts. Deep Learning startups, seeded by hundreds of millions in Venture capital, are mushrooming."</p>\r\n
    <p>The breakthrough that led to a resurgence of interest in Neural Networks referred to above related to a paper published in 2006 by Hinton et al.&nbsp;<a href="https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf" target="_blank" rel="nofollow noopener">A fast learning algorithm for deep belief nets</a>. It is noted by&nbsp;Andrey Kurenkov that "the movement that is ‘Deep Learning’ can very persuasively be said to have started precisely with this paper. But, more important than the name was the idea — that Neural Networks with many layers really could be trained well, if the weights are initialized in a clever way rather than randomly."</p>\r\n
    <h2>GPU Implementations</h2>\r\n
    <p><a href="https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/" target="_blank" rel="nofollow noopener">Kevin Krewell observed</a>&nbsp;in an NVIDIA blog that "The CPU (Central Processing Unit) has often been called the brains of the PC. But increasingly, that brain is being enhanced by another part of the PC – the GPU (Graphics Processing Unit)".</p>\r\n
    <p>The application of GPUs has played an important role in the training and scaling of Deep Learning. For example, a blog hosted on the&nbsp;<a href="https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/" target="_blank" rel="nofollow noopener">NVIDIA</a>&nbsp;website observed that Insight64 principal analyst Nathan Brookwood described the unique capabilities of the GPU this way: “GPUs are optimized for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place.”</p>\r\n
    <p>Examples of GPUs playing an important role in enabling Deep Learning techniques such as&nbsp;<a href="https://en.wikipedia.org/wiki/Convolutional_neural_network#GPU_implementations" target="_blank" rel="nofollow noopener">CNNs to perform faster</a>&nbsp;include:</p>\r\n
    <ul>\r\n
    <li>In 2004&nbsp;<a href="https://www.sciencedirect.com/science/article/pii/S0031320304000524?via%3Dihub" target="_blank" rel="nofollow noopener">K. S. Oh and K. Jung</a>&nbsp;showed that a GPU could accelerate the implementation of a standard Neural Net by 20 times relative to a CPU.</li>\r\n
    <li><a href="https://en.wikipedia.org/wiki/Convolutional_neural_network#GPU_implementations" target="_blank" rel="nofollow noopener">The first GPU-implementation of a CNN</a>&nbsp;was demonstrated&nbsp;<a href="https://hal.inria.fr/inria-00112631/document" target="_blank" rel="nofollow noopener">in 2006 by K. Chellapilla et al.</a>&nbsp;with their implementation demonstrating that the GPU was four times faster than an equivalent implementation on a CPU.</li>\r\n
    <li><a href="https://www.mitpressjournals.org/doi/10.1162/NECO_a_00052" target="_blank" rel="nofollow noopener">Dan Ciresan et al. in 2010</a>&nbsp;demonstrated that multi-layer deep neural networks can be trained quickly using a&nbsp;<a href="https://en.wikipedia.org/wiki/Convolutional_neural_network#GPU_implementations" target="_blank" rel="nofollow noopener">GPU with supervised learning via backpropagation</a>, with their network outperforming other machine learning approaches on the MNIST handwritten-digits benchmark.</li>\r\n
    </ul>\r\n
    <h2><strong>2012 A Key Year</strong></h2>\r\n
    <p>In 2012 a&nbsp;<a href="https://phys.org/news/2012-06-google-team-self-teaching-cats.html" target="_blank" rel="nofollow noopener">Google team led by Andrew Ng and Jeff Dean</a>&nbsp;connected 16,000 computer processors and, using a pool of 10 million images taken from YouTube videos, demonstrated that an artificial neural network could successfully teach itself to recognize cats.</p>\r\n
    <p>A key moment in modern AI history also occurred in&nbsp;<a href="https://en.wikipedia.org/wiki/AlexNet" target="_blank" rel="nofollow noopener">2012 with&nbsp;AlexNet</a>, which was developed by Alex Krizhevsky and Ilya Sutskever along with Geoffrey Hinton (Alex Krizhevsky's PhD supervisor). AlexNet is a CNN that uses a GPU during training.&nbsp;<a href="https://en.wikipedia.org/wiki/AlexNet" target="_blank" rel="nofollow noopener">AlexNet</a>&nbsp;competed in the ImageNet Challenge in 2012, significantly outperforming its rivals. This was a key moment in Machine Learning history, demonstrating the power of Deep Learning and Graphics Processing Units (GPUs) in the field of Computer Vision.</p>\r\n
    <p>As the&nbsp;<a href="https://www.economist.com/special-report/2016/06/23/from-not-working-to-neural-networking" target="_blank" rel="nofollow noopener">Economist reported&nbsp;</a>"In 2010 the winning system could correctly label an image 72% of the time (for humans, the average is 95%). In 2012 one team, led by Geoff Hinton at the University of Toronto, achieved a jump in accuracy to 85%, thanks to a novel technique known as “Deep Learning”. This brought further rapid improvements, producing an accuracy of 96% in the ImageNet Challenge in 2015 and surpassing humans for the first time."</p>\r\n
    <h2><strong>The Explosion in Data</strong></h2>\r\n
    <p>As the internet, mobile and social media grew in reach, so too did the creation of digital data, and this in turn has fuelled the ongoing development of Machine Learning. The importance of data for Deep Learning is shown in the chart by Andrew Ng.</p>\r\n
    <p><img src="/images/Explosion_in_Data.jpeg" alt="Explosion_in_Data.jpeg" /></p>\r\n
    <p><em>Source for chart above&nbsp;<a href="https://www.slideshare.net/ExtractConf/andrew-ng-chief-scientist-at-baidu" target="_blank" rel="noopener">Andrew Ng</a>.</em></p>\r\n
    <p>According to a&nbsp;<a href="https://www.mediapost.com/publications/article/291358/90-of-todays-data-created-in-two-years.html" target="_blank" rel="nofollow noopener">report from IBM Marketing Cloud,</a>&nbsp;“10 Key Marketing Trends For 2017,” 90% of the data in the world today has been created in the last two years alone, at 2.5 quintillion bytes of data a day!</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/The_4_Vs_of_Big_Data_2021.jpeg" alt="The_4_Vs_of_Big_Data_2021.jpeg" width="725" height="445" /></div>\r\n
    <p><em><a href="https://www.ibmbigdatahub.com/infographic/four-vs-big-data" target="_blank" rel="nofollow noopener">Source for the image above IBM The Four V's of Big Data Graphic</a></em></p>\r\n
    <p>Data has continued to grow and Visual Capitalist shows what happens in a minute on the internet.</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width" data-image-href="http://www.visualcapitalist.com/what-happens-in-an-internet-minute-in-2019/"><img src="/images/A_Minute_on_the_Internet_in_2021.png" alt="A_Minute_on_the_Internet_in_2021.png" width="800" height="796" /></div>\r\n
    <p>&nbsp;<em>Source for Graphic Above&nbsp;<a href="http://www.visualcapitalist.com/what-happens-in-an-internet-minute-in-2019/" target="_blank" rel="nofollow noopener">Visual Capitalist What Happens In An Internet Minute</a></em></p>\r\n
    <p>It should be no surprise that many of the leading researchers and breakthroughs in Deep Learning now originate from research teams within the tech and social media giants, who also possess an advantage in the volume and variety of data that they can access. Furthermore, social media giants such as Facebook and Twitter have algorithmic influence over what we see and interact with, which is fuelling debates over ethics and the potential regulation of these companies that are likely to continue.</p>\r\n
    <h2><strong>Overfitting &amp; Regularisation of Models</strong></h2>\r\n
    <p>Overfitting occurs when a model learns the training data too well. Deep Neural Networks entail stacking many hidden layers. Jeremy Jordan in&nbsp;<a href="https://www.jeremyjordan.me/deep-neural-networks-preventing-overfitting/" target="_blank" rel="nofollow noopener">Deep Neural Networks: preventing overfitting</a>&nbsp;observed "This deep stacking allows us to learn more complex relationships in the data. However,&nbsp;because we're increasing the complexity of the model, we're also more prone to potentially overfitting our data."</p>\r\n
    <p>&nbsp;<img src="/images/Just_Right_Just_Left.png" alt="Just_Right_Just_Left.png" width="544" height="200" /></p>\r\n
    <p>&nbsp;</p>\r\n
    <p><em><a href="https://www.datarobot.com/wiki/overfitting/" target="_blank" rel="nofollow noopener">Source for Image Above DataRobot Overfitting<br /><br /></a></em></p>\r\n
    <p><a href="https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/" target="_blank" rel="nofollow noopener">Jason Brownlee Machine Learning Mastery&nbsp;</a>explains that this happens "when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the models ability to generalize."</p>\r\n
    <p><a href="https://arxiv.org/abs/1207.0580" target="_blank" rel="nofollow noopener">Hinton et al. published a paper in 2012</a>&nbsp;on dropout as a means of regularising networks to reduce the risk of overfitting.&nbsp;<a href="https://en.wikipedia.org/wiki/Dropout_(neural_networks)" target="_blank" rel="nofollow noopener">Dropout</a>&nbsp;entails randomly dropping out hidden and visible units in a Neural Network during training. A good summary of using dropout is provided by&nbsp;<a href="https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/" target="_blank" rel="nofollow noopener">Jason Brownlee in A Gentle Introduction to Dropout for Regularizing Deep Neural Networks</a>.</p>\r\n
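    <p>The mechanics of dropout can be sketched in a few lines. Below is a minimal, illustrative NumPy implementation of "inverted" dropout (the variant most modern libraries use); the function name and default values are our own, not from the paper:</p>\r\n

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p during training,
    scaling the survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x  # dropout is a no-op at inference time
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1-p
    return x * mask / (1.0 - p)
```

    <p>Because the surviving units are scaled up during training, no rescaling is needed when the network is used for prediction.</p>\r\n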
    <p>The technique of&nbsp;<a href="https://arxiv.org/pdf/1502.03167.pdf" target="_blank" rel="nofollow noopener">batch normalization&nbsp;</a>for Deep Neural Networks was developed by&nbsp;<a href="https://arxiv.org/pdf/1502.03167.pdf" target="_blank" rel="nofollow noopener">Sergey Ioffe and Christian Szegedy&nbsp;</a>of Google to reduce the risk of overfitting and to allow each layer of a network to learn somewhat more independently of the other layers, as noted by&nbsp;<a href="https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c" target="_blank" rel="nofollow noopener">Firdaouss Doukkali</a>.</p>\r\n
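    <p>The core computation is straightforward to sketch. The following NumPy function is an illustrative training-time version only (it omits the running statistics a real layer keeps for inference): it normalises each feature across the batch, then applies a learnable scale and shift:</p>\r\n

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise each feature over the batch dimension to zero mean and
    unit variance, then apply the learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)          # per-feature mean over the batch
    var = x.var(axis=0)            # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

    <p>With gamma = 1 and beta = 0 the output of each feature has mean 0 and variance 1, whatever the scale of the inputs.</p>\r\n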
    <h2>Generative Adversarial Network (GAN)</h2>\r\n
    <p>GANs are part of the Neural Network family and entail unsupervised learning. They consist of two Neural Networks, a Generator and a Discriminator, that compete with one another in a zero-sum game.</p>\r\n
    <p>The training involves an iterative approach whereby the Generator seeks to generate samples that may trick the Discriminator into believing that they are genuine, whilst the Discriminator seeks to distinguish the real samples from those that are not genuine. The end result is a Generator capable of producing samples that closely resemble the target ones. The method is used to generate visual images such as photos that may appear on the surface to be genuine to the human observer.</p>\r\n
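    <p>The alternating training loop described above can be sketched on a toy one-dimensional problem. In this illustrative NumPy example (the data distribution, model forms and all parameter values are our own choices), the Generator is a simple affine map of noise, the Discriminator a logistic classifier, and each takes a manual gradient step in turn:</p>\r\n

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Real data: N(3, 0.5).  Generator: G(z) = wg*z + bg with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(wd*x + bd).
wd, bd = 0.1, 0.0            # discriminator parameters
wg, bg = 1.0, 0.0            # generator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = 3.0 + 0.5 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    fake = wg * z + bg

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    g_real = d_real - 1.0            # gradient of the loss w.r.t. real scores
    g_fake = d_fake                  # gradient of the loss w.r.t. fake scores
    wd -= lr * np.mean(g_real * real + g_fake * fake)
    bd -= lr * np.mean(g_real + g_fake)

    # Generator step: ascend log D(fake) (the non-saturating loss).
    d_fake = sigmoid(wd * fake + bd)
    g_out = (d_fake - 1.0) * wd      # gradient w.r.t. the generated samples
    wg -= lr * np.mean(g_out * z)
    bg -= lr * np.mean(g_out)
```

    <p>The Generator never sees a real sample directly; it learns only from the Discriminator's gradients, and by the end of training the mean of its samples should sit close to the mean of the real data.</p>\r\n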
    <p>In 2014&nbsp;<a href="https://en.wikipedia.org/wiki/Ian_Goodfellow" target="_blank" rel="nofollow noopener">Ian Goodfellow</a>&nbsp;<em>et al.</em>&nbsp;introduced the name GAN in a paper that popularized the concept and influenced subsequent work. Examples of the achievements of GANs include the generation of faces, as demonstrated in an article entitled "This Person Does Not Exist: Neither Will Anything Eventually with&nbsp;AI."</p>\r\n
    <p>Furthermore, GANs made their entrance onto the artistic stage with the&nbsp;<a href="https://arxiv.org/abs/1706.07068" target="_blank" rel="nofollow noopener">Creative Adversarial Network (CAN)</a>. In 2018 the New York Times reported that art created by a GAN had been sold, in the article&nbsp;<a href="https://www.nytimes.com/2018/10/25/arts/design/ai-art-sold-christies.html" target="_blank" rel="nofollow noopener">AI Art at Christie’s Sells for $432,500</a>.</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width" data-image-href="http://www.nytimes.com/2018/10/25/arts/design/ai-art-sold-christies.html"><img src="/images/GAN_Graph.jpeg" alt="GAN_Graph.jpeg" width="700" height="525" /></div>\r\n
    <p>“Edmond de Belamy, from La Famille de Belamy,” by the French art collective Obvious, was sold on Thursday at Christie’s New York. Credit Christie's.</p>\r\n
    <h2><strong>Other Notable Developments with Deep Learning</strong></h2>\r\n
    <p>The research publication of Andrej Karpathy and Dr Fei-Fei Li from Stanford in 2015,&nbsp;<a href="https://cs.stanford.edu/people/karpathy/cvpr2015.pdf" target="_blank" rel="nofollow noopener">Deep Visual-Semantic Alignments for Generating Image Descriptions</a>, featured in the&nbsp;<a href="https://www.nytimes.com/2014/11/18/science/researchers-announce-breakthrough-in-content-recognition-software.html" target="_blank" rel="nofollow noopener">New York Times</a>: "Two groups of scientists, working independently, have created Artificial Intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding."</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/Other_Notable_Developments_with_Deep_Learning.png" alt="Other_Notable_Developments_with_Deep_Learning.png" width="620" height="519" /></div>\r\n
    <p><em>Source For Image Above:&nbsp;<a href="https://cs.stanford.edu/people/karpathy/cvpr2015.pdf" target="_blank" rel="nofollow noopener">Deep Visual-Semantic Alignments for Generating Image Descriptions<br /><br /></a></em></p>\r\n
    <p>In 2015&nbsp;<a href="https://arxiv.org/pdf/1502.04156.pdf" target="_blank" rel="nofollow noopener">Yoshua Bengio et al. published "Towards Biologically Plausible Deep Learning</a>" stating "We explore more biologically plausible versions of deep representation learning, focusing here mostly on unsupervised learning but developing a learning mechanism that could account for Supervised, Unsupervised and Reinforcement Learning."</p>\r\n
    <h2><strong>Reinforcement Learning</strong></h2>\r\n
    <p>Reinforcement Learning involves an agent taking appropriate actions in order to maximize a reward in a particular situation; Q-Learning is one of its best-known algorithms. It is used by an intelligent agent to solve for the optimal behaviour or path that the agent should take in a specific situation.</p>\r\n
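    <p>The idea can be made concrete with tabular Q-Learning on a toy problem. In this illustrative Python example (the corridor environment and all hyperparameters are our own, chosen for brevity), an agent learns to walk to the rewarding end of a five-state corridor:</p>\r\n

```python
import numpy as np

# A tiny deterministic corridor MDP: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.  Q-Learning updates
# Q(s, a) toward r + gamma * max_a' Q(s', a').
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
gamma, alpha, eps = 0.9, 0.5, 0.3
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration: mostly exploit, sometimes act randomly.
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

# Greedy policy: the best action in each state after training.
policy = Q.argmax(axis=1)
```

    <p>After training, taking the argmax over actions in each non-terminal state recovers the optimal policy: always move right towards the reward.</p>\r\n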
    <p>Ronald Parr and Stuart Russell of UC Berkeley in 1998 published "<a href="https://papers.nips.cc/paper/1384-reinforcement-learning-with-hierarchies-of-machines.pdf" target="_blank" rel="nofollow noopener">Reinforcement Learning with Hierarchies&nbsp;of&nbsp;Machines</a>" and observed that "This allows for the use of prior knowledge to reduce the search space and provides a framework in which knowledge can be transferred across problems and in which component solutions can be recombined to solve larger and more complicated problems."</p>\r\n
    <p>Robotics expert Pieter Abbeel&nbsp;of UC Berkeley is well known for his work on reinforcement learning.<a href="https://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf" target="_blank" rel="nofollow noopener">&nbsp;In 2004 Abbeel and Andrew Ng published&nbsp;Apprenticeship Learning via Inverse Reinforcement Learning</a>. The task of learning from an expert is called&nbsp;apprenticeship learning&nbsp;(also learning by watching, imitation learning, or learning from demonstration).</p>\r\n
    <h2><strong>Deep Reinforcement Learning</strong></h2>\r\n
    <p><a href="https://arxiv.org/abs/1811.12560" target="_blank" rel="nofollow noopener">François-Lavet et al.</a>&nbsp;observed that "Deep Reinforcement Learning is the combination of Reinforcement Learning (RL) and Deep Learning. This field of research has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine. Thus, Deep RL opens up many new applications in domains such as healthcare, robotics, smart grids, finance, and many more."</p>\r\n
    <p>The British company DeepMind Technologies was founded in 2010 before being acquired by Google in 2014. DeepMind used Deep Q-Learning with an application of CNNs whereby the layers were tiled to mimic the effects of receptive fields.</p>\r\n
    <p>They rose to fame when they produced a Neural Network that was able to learn to play video games by analysing the behaviour of pixels on a screen. As&nbsp;<a href="https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf" target="_blank" rel="nofollow noopener">Mnih et al. 2013 "Playing Atari with Deep Reinforcement Learning"&nbsp;</a>observed, "We present the first Deep Learning model to successfully learn control policies directly from high-dimensional sensory input using Reinforcement Learning."</p>\r\n
    <p>Furthermore, DeepMind built a Neural Network with the ability to access external memory – a Neural Turing Machine; see&nbsp;<a href="https://arxiv.org/abs/1605.06065" target="_blank" rel="nofollow noopener">Santoro et al. 2016 "One-shot Learning with Memory-Augmented Neural Networks</a>".</p>\r\n
    <p>Moreover, DeepMind's AlphaGo made headlines in the&nbsp;<a href="https://www.bbc.co.uk/news/technology-35785875" target="_blank" rel="nofollow noopener">BBC</a>&nbsp;news in March 2016 by beating the second-ranked player in the world, Lee Se-dol, and then the number one ranked player Ke Jie in 2017. The AlphaGo Neural Network also utilises a Monte Carlo Tree Search algorithm to discover moves. It was considered an important milestone in AI history, as Go was regarded as such a difficult challenge that computers were not expected to defeat top human players as quickly as AlphaGo did.</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width" data-image-href="http://www.bbc.co.uk/news/technology-35785875"><img src="/images/Alphago_Playing.jpeg" alt="Alphago_Playing.jpeg" /></div>\r\n
    <p><em>Source for Image above&nbsp;<a href="https://www.bbc.co.uk/news/technology-35785875" target="_blank" rel="nofollow noopener">BBC Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol<br /><br /></a></em></p>\r\n
    <p>In 2017, Google DeepMind published a&nbsp;<a href="https://www.nature.com/articles/nature24270" target="_blank" rel="nofollow noopener">paper</a>&nbsp;relating to&nbsp;<a href="https://deepmind.com/blog/alphago-zero-learning-scratch/" target="_blank" rel="nofollow noopener">AlphaGo Zero</a>&nbsp;where it was noted that:</p>\r\n
    <p>"The paper introduces AlphaGo Zero, the latest evolution of&nbsp;<a href="https://deepmind.com/research/alphago/" target="_blank" rel="nofollow noopener">AlphaGo</a>, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is arguably the strongest Go player in history."</p>\r\n
    <p>"It is able to do this by using a novel form of&nbsp;<a href="https://en.wikipedia.org/wiki/Reinforcement_learning" target="_blank" rel="nofollow noopener">Reinforcement Learning</a>, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games."</p>\r\n
    <p>"After just three days of self-play training, AlphaGo Zero emphatically defeated the previously&nbsp;<a href="https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html" target="_blank" rel="nofollow noopener">published version of AlphaGo</a>&nbsp;- which had itself&nbsp;<a href="https://deepmind.com/research/alphago/alphago-korea/" target="_blank" rel="nofollow noopener">defeated 18-time world champion Lee Sedol</a>&nbsp;- by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world's best players and&nbsp;<a href="https://deepmind.com/research/alphago/alphago-china/" target="_blank" rel="nofollow noopener">world number one Ke Jie</a>."</p>\r\n
    <p>Deep Reinforcement Learning is an exciting field of cutting-edge AI research with potential applications in areas such as autonomous cars. For example, in 2018 Alex Kendall demonstrated the first application of Deep Reinforcement Learning to autonomous driving in a paper entitled "<a href="https://arxiv.org/abs/1807.00412" target="_blank" rel="nofollow noopener">Learning to Drive in a Day</a>."</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/Autonomous_Driving_2019.jpeg" alt="Autonomous_Driving_2019.jpeg" width="704" height="528" /></div>\r\n
    <p>In 2019&nbsp;<a href="https://www.wired.co.uk/article/deepmind-starcraft-alphastar" target="_blank" rel="nofollow noopener">Alex Lee</a>&nbsp;observed that DeepMind had finally thrashed humans at StarCraft for real. For more detail see:&nbsp;<a href="https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii" target="_blank" rel="nofollow noopener">AlphaStar: Mastering the Real-Time Strategy Game StarCraft II</a>.</p>\r\n
    <p>In 2020 Jason Dorrier wrote in an article entitled "<a href="https://singularityhub.com/2020/07/26/deepminds-newest-ai-programs-itself-to-make-all-the-right-decisions/" target="_blank" rel="nofollow noopener">DeepMind’s Newest AI Programs Itself to Make All the Right Decisions</a>" that "In a paper recently published on the pre-print server arXiv, a database for research papers that haven’t been peer reviewed yet, the&nbsp;<a href="https://arxiv.org/pdf/2007.08794.pdf" target="_blank" rel="nofollow noopener">DeepMind team described a new deep reinforcement learning algorithm</a>&nbsp;that was able to discover its own value function—a critical programming rule in deep reinforcement learning—from scratch."</p>\r\n
    <h2><strong>Evolutionary Genetic Algorithms</strong></h2>\r\n
    <p>John Holland published a book on&nbsp;<a href="https://en.wikipedia.org/wiki/John_Henry_Holland" target="_blank" rel="nofollow noopener">Genetic Algorithms</a>&nbsp;(GAs) in 1975, and the field was taken further by&nbsp;<a href="https://en.wikipedia.org/wiki/Genetic_algorithm#" target="_blank" rel="nofollow noopener">David E. Goldberg</a>&nbsp;in 1989.</p>\r\n
    <h2>Introduction to Genetic Algorithm &amp; Their Application in Data Science</h2>\r\n
    <p>&nbsp;<img src="/images/Introduction_to_Genetic_Algorithm_Their_Application_in_Data_Science.png" alt="Introduction_to_Genetic_Algorithm_Their_Application_in_Data_Science.png" /></p>\r\n
    <p><em>Image Source for GA above&nbsp;<a href="https://www.analyticsvidhya.com/blog/2017/07/introduction-to-genetic-algorithm/" target="_blank" rel="nofollow noopener">AnalyticsVidya<br /><br /></a></em></p>\r\n
    <p>GAs are a variant of local beam search whereby a successor state is generated by combining two parent states. A state is represented as a binary string, and the population is a number of states that are randomly generated. The quality of a particular state is assessed via a fitness function, and the next generation of states is produced with the higher-quality states more likely to be selected as parents. A process known as Crossover entails selecting pairs using a Roulette Wheel or a Tournament; the crossover point is randomly selected, and the resulting chromosomes represent new states. However, Crossover alone might not be enough: if the population does not contain examples that have each bit of the chromosome at both possible values, parts of the search space are inaccessible. Hence mutation is used, which entails a low probability of flipping a random bit at each crossover step. The process is repeated from selection until the desired fitness levels are attained. For more details on GAs see Analytics Vidhya&nbsp;<a href="https://www.analyticsvidhya.com/blog/2017/07/introduction-to-genetic-algorithm/" target="_blank" rel="nofollow noopener">Introduction to Genetic Algorithm &amp; their application in Data Science</a>.</p>\r\n
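    <p>The cycle described above (fitness evaluation, roulette-wheel selection, single-point crossover and low-probability mutation) can be sketched as a short program. This illustrative Python example (the problem and all parameters are our own) evolves bit-strings towards the all-ones string, the classic "one-max" toy problem:</p>\r\n

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, p_mut = 20, 30, 0.01

def fitness(pop):
    # One-max: fitness is simply the number of 1-bits in each chromosome.
    return pop.sum(axis=1)

def roulette_select(pop, fit):
    # Roulette wheel: selection probability proportional to fitness.
    probs = fit / fit.sum()
    i, j = rng.choice(len(pop), size=2, p=probs)
    return pop[i], pop[j]

pop = rng.integers(0, 2, size=(pop_size, n_bits))
for generation in range(60):
    fit = fitness(pop)
    children = []
    for _ in range(pop_size):
        p1, p2 = roulette_select(pop, fit)
        cut = rng.integers(1, n_bits)            # random crossover point
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(n_bits) < p_mut        # low-probability mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = fitness(pop).max()
```

    <p>Over the generations the average fitness of the population climbs, and the best chromosome approaches the all-ones optimum.</p>\r\n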
    <h2><strong>Neuroevolution&nbsp;</strong></h2>\r\n
    <p>The field of&nbsp;<a href="https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning" target="_blank" rel="nofollow noopener">Neuroevolution</a>&nbsp;is an area of research&nbsp;where&nbsp;<a href="https://eng.uber.com/deep-neuroevolution/" target="_blank" rel="nofollow noopener">Neural Networks are optimized through evolutionary algorithms</a>.&nbsp;<a href="https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning" target="_blank" rel="nofollow noopener">Kenneth O. Stanley</a>&nbsp;describes the field as follows: "Put simply, Neuroevolution is a subfield within AI and Machine Learning that consists of trying to trigger an evolutionary process similar to the one that produced our brains, except inside a computer. In other words, Neuroevolution seeks to develop the means of evolving Neural Networks through evolutionary algorithms."</p>\r\n
    <p>Uber Labs proposed&nbsp;<a href="https://eng.uber.com/deep-neuroevolution/" target="_blank" rel="nofollow noopener">Genetic algorithms as a competitive alternative for training Deep Neural Networks</a>&nbsp;stating "Using a new technique we invented to efficiently evolve DNNs, we were&nbsp;<a href="https://arxiv.org/abs/1712.06567" target="_blank" rel="nofollow noopener">surprised to discover</a>&nbsp;that an extremely<a href="https://github.com/uber-common/deep-neuroevolution" target="_blank" rel="nofollow noopener">&nbsp;simple Genetic Algorithm</a>&nbsp;(GA) can train Deep Convolutional Networks with over 4 million parameters to play Atari games from pixels, and on many games outperforms modern deep reinforcement learning (RL) algorithms (e.g. DQN and A3C) or evolution strategies (ES), while also being faster due to better parallelization.&nbsp;"</p>\r\n
    <p>For more on Neuroevolution see Stanley et al., who provide an overview in a paper entitled "<a href="https://www.nature.com/articles/s42256-018-0006-z" target="_blank" rel="nofollow noopener">Designing neural networks through Neuroevolution</a>".</p>\r\n
    <h2><strong>Transformers and Self-Attention</strong></h2>\r\n
    <p>Transformers with Self-Attention mechanisms have been revolutionising the fields of NLP and text data since their introduction in 2017.</p>\r\n
    <p>The most high-profile and advanced Transformer-based model, GPT-3 (see below), has attained a great deal of attention in the press recently, including<a href="https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3" target="_blank" rel="nofollow noopener">&nbsp;authoring an article that was published in the Guardian.</a></p>\r\n
    <p>The encoder-decoder approach that paved the way for attention appeared in the NLP domain in 2014 with&nbsp;<a href="http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf" target="_blank" rel="nofollow noopener">Cho et al.</a>&nbsp;and&nbsp;<a href="https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf" target="_blank" rel="nofollow noopener">Sutskever et al.</a>, with both groups of researchers separately arguing in favour of the use of two recurrent neural networks (RNNs), termed an encoder and a decoder.</p>\r\n
    <p>For a more detailed look at the history of the development of Attention and Transformers in this domain see the article by&nbsp;<a href="https://buomsoo-kim.github.io/attention/2020/01/01/Attention-mechanism-1.md/" target="_blank" rel="nofollow noopener">Buomsoo Kim "Attention in Neural Networks - 1. Introduction to attention mechanism."</a></p>\r\n
    <p>A selection of key papers to note is shown below:</p>\r\n
    <ul>\r\n
    <li>Seq2Seq, or RNN Encoder-Decoder (<a href="http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf" target="_blank" rel="nofollow noopener">Cho et al. (2014)</a>,&nbsp;<a href="https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf" target="_blank" rel="nofollow noopener">Sutskever et al. (2014)</a>)</li>\r\n
    <li>Alignment models (<a href="https://arxiv.org/pdf/1409.0473.pdf" target="_blank" rel="nofollow noopener">Bahdanau et al. (2015)</a>,&nbsp;<a href="https://arxiv.org/pdf/1508.04025.pdf" target="_blank" rel="nofollow noopener">Luong et al. (2015)</a>)</li>\r\n
    <li>Visual attention (<a href="http://proceedings.mlr.press/v37/xuc15.pdf" target="_blank" rel="nofollow noopener">Xu et al. (2015)</a>)</li>\r\n
    <li>Hierarchical attention (<a href="https://www.aclweb.org/anthology/N16-1174.pdf" target="_blank" rel="nofollow noopener">Yang et al. (2016)</a>)</li>\r\n
    <li>Transformer&nbsp;<a href="https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf" target="_blank" rel="nofollow noopener">Vaswani et al. Attention Is All You Need (Google 2017)</a></li>\r\n
    </ul>\r\n
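    <p>At the heart of the Transformer listed above is scaled dot-product self-attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal single-head NumPy sketch (toy dimensions; no masking or multi-head machinery) looks like this:</p>\r\n

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X,
    where each row of X is one token's embedding."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Row-wise softmax (max-subtracted for numerical stability): each token
    # distributes its attention over every token in the sequence.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: for each token, an attention-weighted mixture of the values.
    return weights @ V, weights

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

    <p>Each row of the attention matrix is a probability distribution over the tokens, which is what lets every position attend directly to every other position, unlike an RNN's sequential bottleneck.</p>\r\n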
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width"><img src="/images/Transformers_Self_Attention.png" alt="Transformers_Self_Attention.png" width="1071" height="429" /></div>\r\n
    <p><em>Source for image above&nbsp;<a href="https://buomsoo-kim.github.io/attention/2020/01/01/Attention-mechanism-1.md/" target="_blank" rel="nofollow noopener">Buomsoo Kim "Attention in Neural Networks - 1. Introduction to attention mechanism."</a></em></p>\r\n
    <p><strong>Some notable Transformer models to be aware of (non-exhaustive list):</strong></p>\r\n
    <p><a href="https://arxiv.org/abs/1810.04805" target="_blank" rel="nofollow noopener">Bidirectional Encoder&nbsp;Representations from Transformers (BERT)</a>&nbsp;Google 2018</p>\r\n
    <p><a href="https://openai.com/blog/language-unsupervised/" target="_blank" rel="nofollow noopener">OpenAI GPT&nbsp;June 11, 2018</a></p>\r\n
    <p><a href="https://openai.com/blog/better-language-models/" target="_blank" rel="nofollow noopener">GPT-2</a>&nbsp;February 14, 2019 OpenAI</p>\r\n
    <p><a href="https://developer.nvidia.com/blog/language-modeling-using-megatron-a100-gpu/" target="_blank" rel="nofollow noopener">Megatron NVIDIA</a></p>\r\n
    <p><a href="https://venturebeat.com/2020/02/10/microsoft-trains-worlds-largest-transformer-language-model/" target="_blank" rel="nofollow noopener">DeepSpeed</a>&nbsp;Microsoft AI &amp; Research</p>\r\n
    <p><a href="https://openai.com/blog/openai-api/" target="_blank" rel="nofollow noopener">GPT-3</a>&nbsp;May / June 2020 OpenAI</p>\r\n
    <p>For a more detailed overview of Transformers see&nbsp;<a href="https://www.linkedin.com/pulse/rise-transformers-imtiaz-adam/" target="_blank" rel="noopener">Transformers on the rise in AI! The Rise of the Transformers: Explaining the Tech Underlying GPT-3</a>.</p>\r\n
    <h2><strong>NeuroSymbolic AI</strong></h2>\r\n
    <p>In recent years we've also noted increasing research in the field of Neuro-Symbolic AI, which combines Symbolic (or Logical) AI with Deep Neural Networks. An example from 2019 would be&nbsp;<a href="https://arxiv.org/abs/1904.12584" target="_blank" rel="nofollow noopener">The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision</a>.</p>\r\n
    <p>NeuroSymbolic AI is defined by&nbsp;<a href="https://mitibmwatsonailab.mit.edu/category/neuro-symbolic-ai/" target="_blank" rel="nofollow noopener">MIT-IBMWatsonAILab</a>&nbsp;as a fusion of AI methods that combine neural networks, which extract statistical structures from raw data files – context about image and sound files, for example – with symbolic representations of problems and logic.&nbsp;"By fusing these two approaches, we’re building a new class of AI that will be far more powerful than the sum of its parts. These neuro-symbolic hybrid systems require less training data and track the steps required to make inferences and draw conclusions. They also have an easier time transferring knowledge across domains. We believe these systems will usher in a new era of AI where machines can learn more like the way humans do, by connecting words with images and mastering abstract concepts."</p>\r\n
    <h2><strong>Federated Learning</strong></h2>\r\n
    <p>A key paper in the history of Federated Learning was presented by Google in 2016, entitled "<a href="https://arxiv.org/abs/1602.05629" target="_blank" rel="nofollow noopener">Communication-Efficient Learning of Deep Networks from Decentralized Data</a>".</p>\r\n
    <p>The paper noted that "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning.&nbsp;"</p>\r\n
    <p>Federated Learning, also known as&nbsp;<strong>collaborative learning</strong>, is&nbsp;a&nbsp;technique in Machine Learning that enables an algorithm to be trained across many decentralised servers (or devices) that hold data locally, without exchanging the data itself.&nbsp;<a href="https://arxiv.org/abs/2007.00914" target="_blank" rel="nofollow noopener">Differential Privacy</a>&nbsp;aims to enhance data privacy protection by measuring the privacy loss in the communication among the elements of Federated Learning. The technique may deal with the key challenges of data privacy and security relating to heterogeneous data, and may impact sectors such as the Internet of Things (IoT), healthcare, banking, insurance and other areas where data privacy and collaborative learning are of key importance. It may well become a key technique in the era of 5G and Edge Computing as the AIoT scales.</p>\r\n
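    <p>The aggregation step at the heart of this approach, Federated Averaging (FedAvg), is simple to sketch. In this illustrative NumPy simulation (the linear-regression clients and all parameter values are our own), each client trains on its private data, and only the updated weights, never the data, are sent back for a size-weighted average:</p>\r\n

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=20):
    """One client's local training: gradient descent on a private
    linear-regression dataset.  Only the weights leave the device."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's model by its share of the total data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients whose private datasets share one underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 100):
    X = rng.standard_normal((n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for round_ in range(20):
    local = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(local, [len(y) for _, y in clients])
```

    <p>The server never sees any client's raw examples, yet over the communication rounds the shared model converges towards the weights that all the private datasets have in common.</p>\r\n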
    <h2><strong>A Summary of the Last 5 Years to where AI is Today</strong></h2>\r\n
    <p>Machine Learning has continued to grow in use cases and across sectors. The development of open source libraries such as&nbsp;<a href="https://en.wikipedia.org/wiki/TensorFlow" target="_blank" rel="nofollow noopener">TensorFlow</a>&nbsp;in 2015 and the decision in 2017 to support&nbsp;<a href="https://en.wikipedia.org/wiki/Keras" target="_blank" rel="nofollow noopener">Keras</a>&nbsp;(authored by François Chollet) in TensorFlow's core library has helped drive the continued deployment of Deep Learning across sectors.</p>\r\n
    <p>In 2017 it was reported that Facebook brought the<a href="https://www.infoworld.com/article/3159120/facebook-brings-gpu-powered-machine-learning-to-python.html" target="_blank" rel="nofollow noopener">&nbsp;PyTorch</a>&nbsp;open source library, based upon the Torch framework, to users.</p>\r\n
    <p>Although significant challenges remain in relation to achieving AGI, Google DeepMind published a paper in 2017 entitled&nbsp;<a href="https://medium.com/mcgill-artificial-intelligence-review/deepmind-just-published-a-mind-blowing-paper-pathnet-f72b1ed38d46" target="_blank" rel="nofollow noopener">PathNet: Evolution Channels Gradient Descent in Super Neural Networks</a>&nbsp;that showed a potential pathway towards AGI.</p>\r\n
    <div class="slate-resizable-image-embed slate-image-embed__resize-full-width" data-image-href="http://arxiv.org/pdf/1701.08734.pdf"><img src="/images/A_Summary_of_the_Last_5_Years_to_where_AI_is_Today.png" alt="A_Summary_of_the_Last_5_Years_to_where_AI_is_Today.png" width="907" height="683" /></div>\r\n
    <p><em>Source for image above:&nbsp;<a href="https://arxiv.org/pdf/1701.08734.pdf" target="_blank" rel="nofollow noopener">PathNet: Evolution Channels Gradient Descent in Super Neural Networks<br /><br /></a></em></p>\r\n
    <p>In 2017 Hinton et al. published a paper “<a href="https://openreview.net/pdf?id=HJWLfGWRb" target="_blank" rel="nofollow noopener">Matrix Capsules with&nbsp;EM&nbsp;Routing</a>” proposing capsules. It remains an area of research, and an article by&nbsp;<a href="https://www.oreilly.com/ideas/introducing-capsule-networks" target="_blank" rel="nofollow noopener">Aurélien Géron</a>&nbsp;provides a good overview of Capsules, noting that an advantage they have is that&nbsp;"CapsNets can generalize well using much less training data."</p>\r\n
    <p><a href="http://news.mit.edu/2019/smarter-training-neural-networks-0506" target="_blank" rel="nofollow noopener">Adam Conner-Simons</a>&nbsp;reported that&nbsp;in 2019 researchers at MIT were working on&nbsp;Smarter training of neural networks and observed that the “MIT CSAIL project shows the neural nets we typically train contain smaller “subnetworks” that can learn just as well, and often faster”.</p>\r\n
    <p>In March 2019,&nbsp;<a href="https://www.theverge.com/2019/3/27/18280665/ai-godfathers-turing-award-2018-yoshua-bengio-geoffrey-hinton-yann-lecun" target="_blank" rel="nofollow noopener">The Verge</a>&nbsp;reported that Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, the ‘Godfathers of AI’, were honored with the Turing Award, often described as the Nobel Prize of computing, for laying the foundations of modern AI with Deep Learning.</p>\r\n
    <p>I believe that the journey of AI will continue onto the edge (on device), further facilitated by the rollout of 5G networks around the world, as we move to a world of the Intelligent Internet of Things with sensors and devices communicating with each other and with humans.</p>\r\n
    <p>Furthermore, leading AI practitioners and researchers believe that a deeper understanding of our own brain will be key for the continued development of AI. Dr Anna Becker, CEO of Endotech.io, who holds a PhD in AI, explained that "understanding the human brain in more detail will be the key to the next generation of AI during the 2020s". This view is echoed in an interview with AI researcher Terrence Sejnowski published in&nbsp;<a href="https://www.techrepublic.com/article/the-deep-learning-revolution-how-understanding-the-brain-will-let-us-supercharge-ai/" target="_blank" rel="nofollow noopener">TechRepublic</a>: "If we want to develop machines with the same cognitive abilities as humans — to think, reason and understand — then Sejnowski says we need to look at how intelligence emerged in the brain."</p>\r\n
    <p>We've been experiencing AI entering our everyday world over the past decade and this trend is set to continue to accelerate as we move into the era of 5G and Edge Computing.</p>\r\n
    <p>Furthermore, in 2017&nbsp;<a href="https://www.cnbc.com/2017/03/13/mark-cuban-the-worlds-first-trillionaire-will-be-an-ai-entrepreneur.html" target="_blank" rel="nofollow noopener">CNBC</a>&nbsp;reported that Billionaire Mark Cuban said that "the world’s first trillionaire will be an AI entrepreneur." Mark Cuban further stated “Whatever you are studying right now if you are not getting up to speed on Deep Learning, Neural Networks, etc., you lose.” The article added that Google had added $9Bn to its revenues due to AI.</p>\r\n
    <p>In relation to the finance sector,&nbsp;<a href="https://thefinancialbrand.com/74626/ai-transform-disrupt-banking-financial-wef-trends-analysis/" target="_blank" rel="nofollow noopener">Jim Marous stated in an article posted in 2018 in the Financial Brand&nbsp;</a>(<a href="https://twitter.com/JimMarous" target="_blank" rel="nofollow noopener">@JimMarous)&nbsp;</a>that "AI is poised to massively disrupt traditional financial services... Ongoing developments in AI have the potential to significantly change the way back offices operate and the experiences consumers receive from financial institutions."</p>\r\n
    <p>Blake Morgan (<a href="https://twitter.com/BlakeMichelleM" target="_blank" rel="nofollow noopener">@BlakeMichelleM</a>) in 2019 provided<a href="https://www.forbes.com/sites/blakemorgan/2019/04/25/20-examples-of-machine-learning-used-in-customer-experience/" target="_blank" rel="nofollow noopener">&nbsp;20 Examples Of Machine Learning Used In Customer Experience</a>&nbsp;with examples including "Guests at Disney parks use MagicBand wristbands as room keys, tickets and payment. The wristband collects information of where the guests are in the park to recommend experiences and even route people around busy areas."</p>\r\n
    <p>Blake Morgan also observed that "JP Morgan streamlines correspondence with machine learning that analyzes documents and extracts important information. Instead of taking hours to sort through complicated documents, customers can now have information in seconds."</p>\r\n
    <p>In May 2019 the&nbsp;<a href="https://www.bbc.co.uk/news/health-48334649" target="_blank" rel="nofollow noopener">BBC&nbsp;</a>reported that researchers at Northwestern University in Illinois and Google conducted studies with Deep Learning for the screening of lung cancer and found that their approach was more effective than the radiologists when examining a single CT scan and was equally effective when doctors had multiple scans to go on. The results published in&nbsp;<a href="https://www.nature.com/articles/s41591-019-0447-x" target="_blank" rel="nofollow noopener">Nature</a>&nbsp;showed the AI could boost cancer detection by 5% while also cutting false-positives (people falsely diagnosed with cancer) by 11%.</p>\r\n
    <p>Increasingly, AI will also be about successful deployment and scaling across sectors of the economy such as finance and healthcare. Spiros Margaris (<a href="https://twitter.com/SpirosMargaris" target="_blank" rel="nofollow noopener">@SpirosMargaris</a>) observed in a discussion with Brett King and myself that those financial institutions that adopt AI and use it to successfully solve customer experience are the ones that will thrive in the next decade.</p>\r\n
    <p>As&nbsp;<a href="https://qz.com/1170185/the-master-algorithm-and-augmented-the-two-books-helping-chinas-xi-jinping-understand-ai/" target="_blank" rel="nofollow noopener">Brett King's book Augmented</a>&nbsp;(<a href="https://twitter.com/BrettKing" target="_blank" rel="nofollow noopener">@BrettKing)&nbsp;</a>notes "Ongoing developments in AI have the potential to significantly change the way back offices operate and the experiences consumers receive from financial institutions...This new era will be based on four themes: AI, experience design, smart infrastructure, and health care technology."</p>\r\n
    <p>The impact of the Covid-19 crisis that the world has been experiencing in 2020 is that it is likely to accelerate the adoption of AI and Digital Transformation. This was noted by the&nbsp;<a href="https://ec.europa.eu/jrc/en/news/artificial-intelligence-and-digital-transformation-early-lessons-coronavirus-crisis" target="_blank" rel="nofollow noopener">EU Commission, which observed that the pandemic boosted AI adoption</a>: "The researchers noted an increased adoption and use of AI in scientific and medical research, in particular in applications such as telemedicine, medical diagnosis, epidemiological studies, and clinical management of patients."</p>\r\n
    <p>"Similarly, the crisis made it possible to overcome barriers in the sharing of data between commercial entities, and between business and governments."</p>\r\n
    <p>"The pandemic gave a boost to the digital transition of companies, public administrations and schools. Plans that had maybe dragged on for years, had to be implemented at very short notice, overcoming many technological, organisational, skill gaps, and cultural barriers."</p>\r\n
    <p>McKinsey expressed similar views in an article entitled "<a href="https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/digital-strategy-in-a-time-of-crisis" target="_blank" rel="nofollow noopener">Digital strategy in a time of crisis</a>" stating that "In one European&nbsp;<a href="https://dmexco.com/stories/is-the-coronavirus-pandemic-an-engine-for-the-digital-transformation/" target="_blank" rel="nofollow noopener">survey</a>, about 70 percent of executives from Austria, Germany, and Switzerland said the pandemic is likely to accelerate the pace of their digital transformation." In addition McKinsey also noted "Companies that have&nbsp;<a href="https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/how-digital-reinventors-are-pulling-away-from-the-pack" target="_blank" rel="nofollow noopener">already invested</a>&nbsp;in AI capabilities will find themselves significantly advantaged."</p>\r\n
    <p>The next stages of the AI journey will be research breakthroughs in enabling Deep Learning techniques to train with smaller data sets, advancements in Deep Reinforcement Learning, Neuroevolution and NeuroSymbolic AI, and continued development of Transformer models, including the type of research that Facebook proposed in 2020 with the&nbsp;Detection Transformer (DETR) model, which combines CNNs with Transformers in a model architecture that can recognize all the objects in an image in a single pass; see&nbsp;<a href="https://ai.facebook.com/blog/end-to-end-object-detection-with-transformers" target="_blank" rel="nofollow noopener">End-to-end object detection with Transformers</a>.</p>\r\n
    <p>The significantly faster speeds and reduced latency that 5G will enable, along with its substantially increased capacity to connect devices, will allow machine-to-machine communication on the edge. Emerging technologies (AR, VR) working alongside AI techniques such as Deep Learning will in turn lead to new opportunities across the economy. In addition, I expect more diversity in AI development in the future, including both&nbsp;<a href="https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai" target="_blank" rel="nofollow noopener">racial diversity</a>&nbsp;and more women in cutting-edge AI research, in addition to&nbsp;<a href="https://en.wikipedia.org/wiki/Fei-Fei_Li" target="_blank" rel="nofollow noopener">Dr Fei-Fei Li</a>&nbsp;and&nbsp;<a href="https://www.statnews.com/2020/09/23/regina-barzilay-mit-artificial-intelligence-award/" target="_blank" rel="nofollow noopener">Regina Barzilay</a>&nbsp;and those noted by&nbsp;<a href="https://www.forbes.com/sites/mariyayao/2017/05/18/meet-20-incredible-women-advancing-a-i-research/#7f57894c26f9" target="_blank" rel="nofollow noopener">Mariya Yao</a>.&nbsp;</p>