#
#
#            Nim's Runtime Library
#        (c) Copyright 2019 Nim Contributors
#
#    See the file "copying.txt", included in this
#    distribution, for details about the copyright.
#

## This module implements a simple HTTP client that can be used to retrieve
## webpages and other data.
##
## Retrieving a website
## ====================
##
## This example uses HTTP GET to retrieve
## `http://google.com`:
##
## .. code-block:: Nim
##   import std/httpclient
##   var client = newHttpClient()
##   echo client.getContent("http://google.com")
##
## The same action can also be performed asynchronously; simply use the
## `AsyncHttpClient`:
##
## .. code-block:: Nim
##   import std/[asyncdispatch, httpclient]
##
##   proc asyncProc(): Future[string] {.async.} =
##     var client = newAsyncHttpClient()
##     return await client.getContent("http://example.com")
##
##   echo waitFor asyncProc()
##
## The functionality implemented by `HttpClient` and `AsyncHttpClient`
## is the same, so you can use whichever one suits you best in the examples
## shown here.
##
## **Note:** You need to run asynchronous examples in an async proc,
## otherwise you will get an `Undeclared identifier: 'await'` error.
##
## Using HTTP POST
## ===============
##
## This example demonstrates the usage of the W3C HTML Validator. It
## uses `multipart/form-data` as the `Content-Type` to send the HTML to be
## validated to the server.
##
## .. code-block:: Nim
##   var client = newHttpClient()
##   var data = newMultipartData()
##   data["output"] = "soap12"
##   data["uploaded_file"] = ("test.html", "text/html",
##     "<html><head></head><body><p>test</p></body></html>")
##
##   echo client.postContent("http://validator.w3.org/check", multipart=data)
##
## To stream files from disk when performing the request, use `addFiles`.
##
## **Note:** `addFiles` allocates a new `Mimetypes` database every time it is
## called; you can pass your own via the `mimeDb` parameter to avoid this.
##
## .. code-block:: Nim
##   let mimes = newMimetypes()
##   var client = newHttpClient()
##   var data = newMultipartData()
##   data.addFiles({"uploaded_file": "test.html"}, mimeDb = mimes)
##
##   echo client.postContent("http://validator.w3.org/check", multipart=data)
##
## You can also make POST requests with custom headers.
## This example sets `Content-Type` to `application/json`
## and uses a JSON object for the body:
##
## .. code-block:: Nim
##   import std/[httpclient, json]
##
##   let client = newHttpClient()
##   client.headers = newHttpHeaders({ "Content-Type": "application/json" })
##   let body = %*{
##       "data": "some text"
##   }
##   let response = client.request("http://some.api", httpMethod = HttpPost, body = $body)
##   echo response.status
##
## Progress reporting
## ==================
##
## You may specify a callback procedure to be called during an HTTP request.
## This callback will be executed every second with information about the
## progress of the HTTP request.
##
## .. code-block:: Nim
##    import std/[asyncdispatch, httpclient]
##
##    proc onProgressChanged(total, progress, speed: BiggestInt) {.async.} =
##      echo("Downloaded ", progress, " of ", total)
##      echo("Current rate: ", speed div 1000, "kb/s")
##
##    proc asyncProc() {.async.} =
##      var client = newAsyncHttpClient()
##      client.onProgressChanged = onProgressChanged
##      discard await client.getContent("http://speedtest-ams2.digitalocean.com/100mb.test")
##
##    waitFor asyncProc()
##
## If you would like to remove the callback simply set it to `nil`.
##
## .. code-block:: Nim
##   client.onProgressChanged = nil
##
## .. warning:: The `total` reported by httpclient may be 0 in some cases.
##
##
## SSL/TLS support
## ===============
## This requires the OpenSSL library. Fortunately, it's widely used and installed
## on many operating systems. httpclient will use SSL automatically if you give
## any of the functions a URL with the `https` scheme, for example:
## `https://github.com/`.
##
## You will also have to compile with `ssl` defined like so:
## `nim c -d:ssl ...`.
##
## Certificate validation is performed by default: the default SSL context is
## created with `CVerifyPeer`.
##
## A set of directories and files from the `ssl_certs <ssl_certs.html>`_
## module are scanned to locate CA certificates.
##
## See `newContext <net.html#newContext.string,string,string,string>`_ to tweak or disable certificate validation.
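##
## For example, here is a sketch of how to pass a custom context that disables
## certificate validation (requires compiling with `-d:ssl`; only do this for
## testing):
##
## .. code-block:: Nim
##   import std/[httpclient, net]
##
##   let insecureClient = newHttpClient(sslContext = newContext(verifyMode = CVerifyNone))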
##
## Timeouts
## ========
##
## Currently only the synchronous functions support a timeout.
## The timeout is measured in milliseconds; once it is set, any call on a
## socket which may block will be susceptible to this timeout.
##
## It may be surprising, but the
## function as a whole can take longer than the specified timeout; only
## individual internal calls on the socket are affected. In practice this means
## that as long as the server is sending data, an exception will not be raised.
## If, however, data does not reach the client within the specified timeout, a
## `TimeoutError` exception will be raised.
##
## Here is how to set a timeout when creating an `HttpClient` instance:
##
## .. code-block:: Nim
##    import std/httpclient
##
##    let client = newHttpClient(timeout = 42)
##
## Proxy
## =====
##
## A proxy can be specified when creating an `HttpClient` or `AsyncHttpClient`.
## To do this, construct one with the `newProxy` constructor and pass it via
## the `proxy` parameter. Unfortunately, only basic authentication is supported
## at the moment.
##
## Some examples of how to configure a proxy for `HttpClient`:
##
## .. code-block:: Nim
##    import std/httpclient
##
##    let myProxy = newProxy("http://myproxy.network")
##    let client = newHttpClient(proxy = myProxy)
##
## Get Proxy URL from environment variables:
##
## .. code-block:: Nim
##    import std/[httpclient, os]
##
##    var url = ""
##    if existsEnv("http_proxy"):
##      url = getEnv("http_proxy")
##    elif existsEnv("https_proxy"):
##      url = getEnv("https_proxy")
##
##    let myProxy = newProxy(url = url)
##    let client = newHttpClient(proxy = myProxy)
##
## Redirects
## =========
##
## The maximum number of redirects can be set with the `maxRedirects`
## parameter of `int` type. It specifies the maximum number of redirects to
## follow and defaults to `5`; set it to `0` to disable redirects.
##
## Here is an example of how to set `maxRedirects` for an `HttpClient`:
##
## .. code-block:: Nim
##    import std/httpclient
##
##    let client = newHttpClient(maxRedirects = 0)
##

import std/private/since

import std/[
  net, strutils, uri, parseutils, base64, os, mimetypes,
  math, random, httpcore, times, tables, streams, monotimes,
  asyncnet, asyncdispatch, asyncfile, nativesockets,
]

export httpcore except parseHeader # TODO: The `except` doesn't work

type
  Response* = ref object
    version*: string
    status*: string
    headers*: HttpHeaders
    body: string
    bodyStream*: Stream

  AsyncResponse* = ref object
    version*: string
    status*: string
    headers*: HttpHeaders
    body: string
    bodyStream*: FutureStream[string]

proc code*(response: Response | AsyncResponse): HttpCode
           {.raises: [ValueError, OverflowDefect].} =
  ## Retrieves the specified response's `HttpCode`.
  ##
  ## Raises a `ValueError` if the response's `status` does not have a
  ## corresponding `HttpCode`.
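  ##
  ## A short usage sketch (the URL is illustrative):
  ##
  ## .. code-block:: Nim
  ##   import std/httpclient
  ##
  ##   let client = newHttpClient()
  ##   let resp = client.get("http://example.com")
  ##   if resp.code.is2xx:
  ##     echo "OK: ", resp.code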
  return response.status[0 .. 2].parseInt.HttpCode

proc contentType*(response: Response | AsyncResponse): string {.inline.} =
  ## Retrieves the specified response's content type.
  ##
  ## This is effectively the value of the "Content-Type" header.
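  ##
  ## A short usage sketch (the URL is illustrative):
  ##
  ## .. code-block:: Nim
  ##   import std/httpclient
  ##
  ##   let client = newHttpClient()
  ##   let resp = client.get("http://example.com")
  ##   echo resp.contentType   # e.g. "text/html; charset=UTF-8"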
  response.headers.getOrDefault("content-type")

proc contentLength*(response: Response | AsyncResponse): int =
  ## Retrieves the specified response's content length.
  ##
  ## This is effectively the value of the "Content-Length" header.
  ##
  ## A `ValueError` exception will be raised if the value is not an integer.
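  ##
  ## A short usage sketch (the URL is illustrative and the server is assumed
  ## to send a `Content-Length` header):
  ##
  ## .. code-block:: Nim
  ##   import std/httpclient
  ##
  ##   let client = newHttpClient()
  ##   let resp = client.get("http://example.com")
  ##   echo "body is ", resp.contentLength, " bytes long"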
  var contentLengthHeader = response.headers.getOrDefault("Content-Length")
  result = contentLengthHeader.parseInt()
  doAssert(result >= 0 and result <= high(int32))

proc lastModified*(response: Response | AsyncResponse): DateTime =
  ## Retrieves the specified response's last modified time.
  ##
  ## This is effectively the value of the "Last-Modified" header.
  ##
  ## Raises a `ValueError` if the parsing fails or the value is not a correctly
  ## formatted time.
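  ##
  ## A short usage sketch (the URL is illustrative and the server is assumed
  ## to send a `Last-Modified` header):
  ##
  ## .. code-block:: Nim
  ##   import std/httpclient
  ##
  ##   let client = newHttpClient()
  ##   let resp = client.get("http://example.com")
  ##   echo resp.lastModified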
  var lastModifiedHeader = response.headers.getOrDefault("last-modified")
  result = parse(lastModifiedHeader, "ddd, dd MMM yyyy HH:mm:ss 'GMT'", utc())

proc body*(response: Response): string =
  ## Retrieves the specified response's body.
  ##
  ## The response's body stream is read synchronously.
  if response.body.len == 0:
    response.body = response.bodyStream.readAll()
  return response.body

proc body*(response: AsyncResponse): Future[string] {.async.} =
  ## Reads the response's body and caches it. The read is performed only
  ## once.
  if response.body.len == 0:
    response.body = await readAll(response.bodyStream)
  return response.body

type
  Proxy* = ref object
    url*: Uri
    auth*: string

  MultipartEntry = object
    name, content: string
    case isFile: bool
    of true:
      filename, contentType: string
      fileSize: int64
      isStream: bool
    else: discard

  MultipartEntries* = openArray[tuple[name, content: string]]
  MultipartData* = ref object
    content: seq[MultipartEntry]

  ProtocolError* = object of IOError ## Exception that is raised when the
                                     ## server does not conform to the
                                     ## implemented protocol.

  HttpRequestError* = object of IOError ## Raised in the `getContent` and
                                        ## `postContent` procs when the
                                        ## server returns an error.

const defUserAgent* = "Nim httpclient/" & NimVersion

proc httpError(msg: string) =
  var e: ref ProtocolError
  new(e)
  e.msg = msg
  raise e

proc fileError(msg: string) =
  var e: ref IOError
  new(e)
  e.msg = msg
  raise e

when not defined(ssl):
  type SslContext = ref object
var defaultSslContext {.threadvar.}: SslContext

proc getDefaultSSL(): SslContext =
  result = defaultSslContext
  when defined(ssl):
    if result == nil:
      defaultSslContext = newContext(verifyMode = CVerifyPeer)
      result = defaultSslContext
      doAssert result != nil, "failure to initialize the SSL context"

proc newProxy*(url: string; auth = ""): Proxy =
  ## Constructs a new `Proxy` object.
  result = Proxy(url: parseUri(url), auth: auth)

proc newProxy*(url: Uri; auth = ""): Proxy =
  ## Constructs a new `Proxy` object.
  result = Proxy(url: url, auth: auth)

proc newMultipartData*: MultipartData {.inline.} =
  ## Constructs a new `MultipartData` object.
  MultipartData()

proc `$`*(data: MultipartData): string {.since: (1, 1).} =
  ## Converts the `MultipartData` to a string so that it is human-readable
  ## when echoed; see https://github.com/nim-lang/Nim/issues/11863.
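  ##
  ## A small illustrative example:
  ##
  ## .. code-block:: Nim
  ##   var data = newMultipartData()
  ##   data["output"] = "soap12"
  ##   echo $data   # human-readable dump of the entries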
  const sep = "-".repeat(30)
  for pos, entry in data.content:
    result.add(sep & center($pos, 3) & sep)
    result.add("\nname=\"" & entry.name & "\"")
    if entry.isFile:
      result.add("; filename=\"" & entry.filename & "\"\n")
      result.add("Content-Type: " & entry.contentType)
    result.add("\n\n" & entry.content & "\n")

proc add*(p: MultipartData, name, content: string, filename: string = "",
          contentType: string = "", useStream = true) =
  ## Add a value to the multipart data.
  ##
  ## When `filename` is set and `useStream` is `true` (the default), `content`
  ## is treated as the path of a file that will be streamed from disk when the
  ## request is made. When `useStream` is `false`, `content` is sent as-is.
  ##
  ## Raises a `ValueError` exception if
  ## `name`, `filename` or `contentType` contain newline characters.
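  ##
  ## A small illustrative example (the values are made up):
  ##
  ## .. code-block:: Nim
  ##   var data = newMultipartData()
  ##   # A plain form field.
  ##   data.add("format", "json")
  ##   # Inline file content, sent as-is rather than streamed from disk.
  ##   data.add("page", "<p>hi</p>", filename = "hi.html",
  ##            contentType = "text/html", useStream = false)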
  if {'\c', '\L'} in name:
    raise newException(ValueError, "name contains a newline character")
  if {'\c', '\L'} in filename:
    raise newException(ValueError, "filename contains a newline character")
  if {'\c', '\L'} in contentType:
    raise newException(ValueError, "contentType contains a newline character")

  var entry = MultipartEntry(
    name: name,
    content: content,
    isFile: filename.len > 0
  )

  if entry.isFile:
    entry.isStream = useStream
    entry.filename = filename
    entry.contentType = contentType

  p.content.add(entry)

proc add*(p: MultipartData, xs: MultipartEntries): MultipartData
         {.discardable.} =
  ## Add a list of multipart entries to the multipart data `p`. All values are
  ## added without a filename and without a content type.
  ##
  ## .. code-block:: Nim
  ##   data.add({"action": "login", "format": "json"})
  for name, content in xs.items:
    p.add(name, content)
  result = p

proc newMultipartData*(xs: MultipartEntries): MultipartData =
  ## Create a new multipart data object and fill it with the entries `xs`
  ## directly.
  ##
  ## .. code-block:: Nim
  ##   var data = newMultipartData({"action": "login", "format": "json"})
  result = MultipartData()
  for entry in xs:
    result.add(entry.name, entry.content)

proc addFiles*(p: MultipartData, xs: openArray[tuple[name, file: string]],
               mimeDb = newMimetypes(), useStream = true):
               MultipartData {.discardable.} =
  ## Add files to a multipart data object. The files will be streamed from disk
  ## when the request is being made. When `useStream` is `false`, the files are
  ## instead read into memory, but beware this is very memory inefficient even
  ## for small files. The MIME types will automatically be determined.
  ## Raises an `IOError` if the file cannot be opened or reading fails. To
  ## manually specify file content, filename and MIME type, use `[]=` instead.
  ##
  ## .. code-block:: Nim
  ##   data.addFiles({"uploaded_file": "public/test.html"})
  for name, file in xs.items:
    var contentType: string
    let (_, fName, ext) = splitFile(file)
    if ext.len > 0:
      contentType = mimeDb.getMimetype(ext[1..ext.high], "")
    let content = if useStream: file else: readFile(file)
    p.add(name, content, fName & ext, contentType, useStream = useStream)
  result = p

proc `[]=`*(p: MultipartData, name, content: string) {.inline.} =
  ## Add a multipart entry to the multipart data `p`. The value is added
  ## without a filename and without a content type.
  ##
  ## .. code-block:: Nim
  ##   data["username"] = "NimUser"
  p.add(name, content)

proc `[]=`*(p: MultipartData, name: string,
            file: tuple[name, contentType, content: string]) {.inline.} =
  ## Add a file to the multipart data `p`, specifying filename, contentType
  ## and content manually.
  ##
  ## .. code-block:: Nim
  ##   data["uploaded_file"] = ("test.html", "text/html",
  ##     "<html><head></head><body><p>test</p></body></html>")
  p.add(name, file.content, file.name, file.contentType, useStream = false)

proc getBoundary(p: MultipartData): string =
  if p == nil or p.content.len == 0: return
  while true:
    result = $rand(int.high)
    for i, entry in p.content:
      if result in entry.content: break
      elif i == p.content.high: return

proc sendFile(socket: Socket | AsyncSocket,
              entry: MultipartEntry) {.multisync.} =
  const chunkSize = 2^18
  let file =
    when socket is AsyncSocket: openAsync(entry.content)
    else: newFileStream(entry.content, fmRead)

  var buffer: string
  while true:
    buffer =
      when socket is AsyncSocket: (await read(file, chunkSize))
      else: readStr(file, chunkSize)
    if buffer.len == 0: break
    await socket.send(buffer)
  file.close()

proc getNewLocation(lastURL: Uri, headers: HttpHeaders): Uri =
  let newLocation = headers.getOrDefault"Location"
  if newLocation == "": httpError("location header expected")
  # Relative URLs. (Not part of the spec, but soon will be.)
  let parsedLocation = parseUri(newLocation)
  if parsedLocation.hostname == "" and parsedLocation.path != "":
    result = lastURL
    result.path = parsedLocation.path
    result.query = parsedLocation.query
    result.anchor = parsedLocation.anchor
  else:
    result = parsedLocation

proc generateHeaders(requestUrl: Uri, httpMethod: HttpMethod, headers: HttpHeaders,
                     proxy: Proxy): string =
  # GET
  result = $httpMethod
  result.add ' '

  if proxy.isNil or requestUrl.scheme == "https":
    # /path?query
    if not requestUrl.path.startsWith("/"): result.add '/'
    result.add(requestUrl.path)
    if requestUrl.query.len > 0:
      result.add("?" & requestUrl.query)
  else:
    # Remove the 'http://' from the URL for CONNECT requests for TLS connections.
    var modifiedUrl = requestUrl
    if requestUrl.scheme == "https": modifiedUrl.scheme = ""
    result.add($modifiedUrl)

  # HTTP/1.1\c\l
  result.add(" HTTP/1.1" & httpNewLine)

  # Host header.
  if not headers.hasKey("Host"):
    if requestUrl.port == "":
      add(result, "Host: " & requestUrl.hostname & httpNewLine)
    else:
      add(result, "Host: " & requestUrl.hostname & ":" & requestUrl.port & httpNewLine)

  # Connection header.
  if not headers.hasKey("Connection"):
    add(result, "Connection: Keep-Alive" & httpNewLine)

  # Proxy auth header.
  if not proxy.isNil and proxy.auth != "":
    let auth = base64.encode(proxy.auth)
    add(result, "Proxy-Authorization: basic " & auth & httpNewLine)

  for key, val in headers:
    add(result, key & ": " & val & httpNewLine)

  add(result, httpNewLine)

type
  ProgressChangedProc*[ReturnType] =
    proc (total, progress, speed: BiggestInt):
      ReturnType {.closure, gcsafe.}

  HttpClientBase*[SocketType] = ref object
    socket: SocketType
    connected: bool
    currentURL: Uri       ## Where we are currently connected.
    headers*: HttpHeaders ## Headers to send in requests.
    maxRedirects: Natural ## Maximum redirects, set to `0` to disable.
    userAgent: string
    timeout*: int         ## Only used for blocking HttpClient for now.
    proxy: Proxy
    ## `nil` or the callback to call when request progress changes.
    when SocketType is Socket:
      onProgressChanged*: ProgressChangedProc[void]
    else:
      onProgressChanged*: ProgressChangedProc[Future[void]]
    when defined(ssl):
      sslContext: net.SslContext
    contentTotal: BiggestInt
    contentProgress: BiggestInt
    oneSecondProgress: BiggestInt
    lastProgressReport: MonoTime
    when SocketType is AsyncSocket:
      bodyStream: FutureStream[string]
      parseBodyFut: Future[void]
    else:
      bodyStream: Stream
    getBody: bool         ## When `false`, the body is never read in requestAux.

type
  HttpClient* = HttpClientBase[Socket]

proc newHttpClient*(userAgent = defUserAgent, maxRedirects = 5,
                    sslContext = getDefaultSSL(), proxy: Proxy = nil,
                    timeout = -1, headers = newHttpHeaders()): HttpClient =
  ## Creates a new HttpClient instance.
  ##
  ## `userAgent` specifies the user agent that will be used when making
  ## requests.
  ##
  ## `maxRedirects` specifies the maximum amount of redirects to follow,
  ## default is 5.
  ##
  ## `sslContext` specifies the SSL context to use for HTTPS requests.
  ## See `SSL/TLS support <#sslslashtls-support>`_
  ##
  ## `proxy` specifies an HTTP proxy to use for this HTTP client's
  ## connections.
  ##
  ## `timeout` specifies the number of milliseconds to allow before a
  ## `TimeoutError` is raised.
  ##
  ## `headers` specifies the HTTP Headers.
  runnableExamples:
    import std/strutils

    let exampleHtml = newHttpClient().getContent("http://example.com")
    assert "Example Domain" in exampleHtml
    assert "Pizza" notin exampleHtml

  new result
  result.headers = headers
  result.userAgent = userAgent
  result.maxRedirects = maxRedirects
  result.proxy = proxy
  result.timeout = timeout
  result.onProgressChanged = nil
  result.bodyStream = newStringStream()
  result.getBody = true
  when defined(ssl):
    result.sslContext = sslContext

type
  AsyncHttpClient* = HttpClientBase[AsyncSocket]

proc newAsyncHttpClient*(userAgent = defUserAgent, maxRedirects = 5,
                         sslContext = getDefaultSSL(), proxy: Proxy = nil,
                         headers = newHttpHeaders()): AsyncHttpClient =
  ## Creates a new AsyncHttpClient instance.
  ##
  ## `userAgent` specifies the user agent that will be used when making
  ## requests.
  ##
  ## `maxRedirects` specifies the maximum amount of redirects to follow,
  ## default is 5.
  ##
  ## `sslContext` specifies the SSL context to use for HTTPS requests.
  ##
  ## `proxy` specifies an HTTP proxy to use for this HTTP client's
  ## connections.
  ##
  ## `headers` specifies the HTTP Headers.
  runnableExamples:
    import std/[asyncdispatch, strutils]

    proc asyncProc(): Future[string] {.async.} =
      let client = newAsyncHttpClient()
      result = await client.getContent("http://example.com")

    let exampleHtml = waitFor asyncProc()
    assert "Example Domain" in exampleHtml
    assert "Pizza" notin exampleHtml
  
  new result
  result.headers = headers
  result.userAgent = userAgent
  result.maxRedirects = maxRedirects
  result.proxy = proxy
  result.timeout = -1 # TODO
  result.onProgressChanged = nil
  result.bodyStream = newFutureStream[string]("newAsyncHttpClient")
  result.getBody = true
  when defined(ssl):
    result.sslContext = sslContext

proc close*(client: HttpClient | AsyncHttpClient) =
  ## Closes any connections held by the HTTP client.
  if client.connected:
    client.socket.close()
    client.connected = false

proc getSocket*(client: HttpClient): Socket {.inline.} =
  ## Get the network socket, useful if you want to find out more details about
  ## the connection.
  ##
  ## This example shows info about the local and remote endpoints:
  ##
  ## .. code-block:: Nim
  ##   if client.connected:
  ##     echo client.getSocket.getLocalAddr
  ##     echo client.getSocket.getPeerAddr
  ##
  return client.socket

proc getSocket*(client: AsyncHttpClient): AsyncSocket {.inline.} =
  return client.socket

proc reportProgress(client: HttpClient | AsyncHttpClient,
                    progress: BiggestInt) {.multisync.} =
  client.contentProgress += progress
  client.oneSecondProgress += progress
  if (getMonoTime() - client.lastProgressReport).inSeconds > 1:
    if not client.onProgressChanged.isNil:
      await client.onProgressChanged(client.contentTotal,
                                     client.contentProgress,
                                     client.oneSecondProgress)
      client.oneSecondProgress = 0
      client.lastProgressReport = getMonoTime()

proc recvFull(client: HttpClient | AsyncHttpClient, size: int, timeout: int,
              keep: bool): Future[int] {.multisync.} =
  ## Ensures that all the data requested is read and returned.
  var readLen = 0
  while true:
    if size == readLen: break

    let remainingSize = size - readLen
    let sizeToRecv = min(remainingSize, net.BufferSize)

    when client.socket is Socket:
      let data = client.socket.recv(sizeToRecv, timeout)
    else:
      let data = await client.socket.recv(sizeToRecv)
    if data == "":
      client.close()
      break # We've been disconnected.

    readLen.inc(data.len)
    if keep:
      await client.bodyStream.write(data)

    await reportProgress(client, data.len)

  return readLen

proc parseChunks(client: HttpClient | AsyncHttpClient): Future[void]
                 {.multisync.} =
  while true:
    var chunkSize = 0
    var chunkSizeStr = await client.socket.recvLine()
    var i = 0
    if chunkSizeStr == "":
      httpError("Server terminated connection prematurely")
    while i < chunkSizeStr.len:
      case chunkSizeStr[i]
      of '0'..'9':
        chunkSize = chunkSize shl 4 or (ord(chunkSizeStr[i]) - ord('0'))
      of 'a'..'f':
        chunkSize = chunkSize shl 4 or (ord(chunkSizeStr[i]) - ord('a') + 10)
      of 'A'..'F':
        chunkSize = chunkSize shl 4 or (ord(chunkSizeStr[i]) - ord('A') + 10)
      of ';':
        # http://tools.ietf.org/html/rfc2616#section-3.6.1
        # We don't care about chunk-extensions.
        break
      else:
        httpError("Invalid chunk size: " & chunkSizeStr)
      inc(i)
    if chunkSize <= 0:
      discard await recvFull(client, 2, client.timeout, false) # Skip \c\L
      break
    var bytesRead = await recvFull(client, chunkSize, client.timeout, true)
    if bytesRead != chunkSize:
      httpError("Server terminated connection prematurely")

    bytesRead = await recvFull(client, 2, client.timeout, false) # Skip \c\L
    if bytesRead != 2:
      httpError("Server terminated connection prematurely")

    # Trailer headers will only be sent if the request specifies that we want
    # them: http://tools.ietf.org/html/rfc2616#section-3.6.1

proc parseBody(client: HttpClient | AsyncHttpClient, headers: HttpHeaders,
               httpVersion: string): Future[void] {.multisync.} =
  # Reset progress from previous requests.
  client.contentTotal = 0
  client.contentProgress = 0
  client.oneSecondProgress = 0
  client.lastProgressReport = MonoTime()

  when client is AsyncHttpClient:
    assert(not client.bodyStream.finished)

  if headers.getOrDefault"Transfer-Encoding" == "chunked":
    await parseChunks(client)
  else:
    # -REGION- Content-Length
    # (http://tools.ietf.org/html/rfc2616#section-4.4) NR.3
    var contentLengthHeader = headers.getOrDefault"Content-Length"
    if contentLengthHeader != "":
      var length = contentLengthHeader.parseInt()
      client.contentTotal = length
      if length > 0:
        let recvLen = await client.recvFull(length, client.timeout, true)
        if recvLen == 0:
          client.close()
          httpError("Got disconnected while trying to read body.")
        if recvLen != length:
          httpError("Received length doesn't match expected length. Wanted " &
                    $length & " got: " & $recvLen)
    else:
      # (http://tools.ietf.org/html/rfc2616#section-4.4) NR.4 TODO

      # -REGION- Connection: Close
      # (http://tools.ietf.org/html/rfc2616#section-4.4) NR.5
      let implicitConnectionClose =
        httpVersion == "1.0" or
        # This doesn't match the HTTP spec, but it fixes issues for non-conforming servers.
        (httpVersion == "1.1" and headers.getOrDefault"Connection" == "")
      if headers.getOrDefault"Connection" == "close" or implicitConnectionClose:
        while true:
          let recvLen = await client.recvFull(4000, client.timeout, true)
          if recvLen != 4000:
            client.close()
            break

  when client is AsyncHttpClient:
    client.bodyStream.complete()
  else:
    client.bodyStream.setPosition(0)

  # If the server will close our connection, then no matter the method of
  # reading the body, we need to close our socket.
  if headers.getOrDefault"Connection" == "close":
    client.close()

proc parseResponse(client: HttpClient | AsyncHttpClient,
                   getBody: bool): Future[Response | AsyncResponse]
                   {.multisync.} =
  new result
  var parsedStatus = false
  var linei = 0
  var fullyRead = false
  var line = ""
  result.headers = newHttpHeaders()
  while true:
    linei = 0
    when client is HttpClient:
      line = await client.socket.recvLine(client.timeout)
    else:
      line = await client.socket.recvLine()
    if line == "":
      # We've been disconnected.
      client.close()
      break
    if line == httpNewLine:
      fullyRead = true
      break
    if not parsedStatus:
      # Parse HTTP version info and status code.
      var le = skipIgnoreCase(line, "HTTP/", linei)
      if le <= 0:
        httpError("invalid http version, `" & line & "`")
      inc(linei, le)
      le = skipIgnoreCase(line, "1.1", linei)
      if le > 0: result.version = "1.1"
      else:
        le = skipIgnoreCase(line, "1.0", linei)
        if le <= 0: httpError("unsupported http version")
        result.version = "1.0"
      inc(linei, le)
      # Status code
      linei.inc skipWhitespace(line, linei)
      result.status = line[linei .. ^1]
      parsedStatus = true
    else:
      # Parse headers
      var name = ""
      var le = parseUntil(line, name, ':', linei)
      if le <= 0: httpError("invalid headers")
      inc(linei, le)
      if line[linei] != ':': httpError("invalid headers")
      inc(linei) # Skip :

      result.headers.add(name, line[linei .. ^1].strip())
      if result.headers.len > headerLimit:
        httpError("too many headers")

  if not fullyRead:
    httpError("Connection was closed before full request has been made")

  when client is HttpClient:
    result.bodyStream = newStringStream()
  else:
    result.bodyStream = newFutureStream[string]("parseResponse")

  if getBody and result.code != Http204:
    client.bodyStream = result.bodyStream
    when client is HttpClient:
      parseBody(client, result.headers, result.version)
    else:
      assert(client.parseBodyFut.isNil or client.parseBodyFut.finished)
      # do not wait here for the body request to complete
      client.parseBodyFut = parseBody(client, result.headers, result.version)
      client.parseBodyFut.addCallback do():
        if client.parseBodyFut.failed:
          client.bodyStream.fail(client.parseBodyFut.error)

proc newConnection(client: HttpClient | AsyncHttpClient,
                   url: Uri) {.multisync.} =
  if client.currentURL.hostname != url.hostname or
      client.currentURL.scheme != url.scheme or
      client.currentURL.port != url.port or
      (not client.connected):
    # Connect to proxy if specified
    let connectionUrl =
      if client.proxy.isNil: url else: client.proxy.url

    let isSsl = connectionUrl.scheme.toLowerAscii() == "https"

    if isSsl and not defined(ssl):
      raise newException(HttpRequestError,
        "SSL support is not available. Cannot connect over SSL. Compile with -d:ssl to enable.")

    if client.connected:
      client.close()
      client.connected = false

    # TODO: I should be able to write 'net.Port' here...
    let port =
      if connectionUrl.port == "":
        if isSsl:
          nativesockets.Port(443)
        else:
          nativesockets.Port(80)
      else: nativesockets.Port(connectionUrl.port.parseInt)

    when client is HttpClient:
      client.socket = await net.dial(connectionUrl.hostname, port)
    elif client is AsyncHttpClient:
      client.socket = await asyncnet.dial(connectionUrl.hostname, port)
    else: {.fatal: "Unsupported client type".}

    when defined(ssl):
      if isSsl:
        try:
          client.sslContext.wrapConnectedSocket(
            client.socket, handshakeAsClient, connectionUrl.hostname)
        except:
          client.socket.close()
          raise getCurrentException()

    # If need to CONNECT through proxy
    if url.scheme == "https" and not client.proxy.isNil:
      when defined(ssl):
        # Pass only host:port for CONNECT
        var connectUrl = initUri()
        connectUrl.hostname = url.hostname
        connectUrl.port = if url.port != "": url.port else: "443"

        let proxyHeaderString = generateHeaders(connectUrl, HttpConnect,
            newHttpHeaders(), client.proxy)
        await client.socket.send(proxyHeaderString)
        let proxyResp = await parseResponse(client, false)

        if not proxyResp.status.startsWith("200"):
          raise newException(HttpRequestError,
                            "The proxy server rejected a CONNECT request, " &
                            "so a secure connection could not be established.")
        client.sslContext.wrapConnectedSocket(
          client.socket, handshakeAsClient, url.hostname)
      else:
        raise newException(HttpRequestError,
        "SSL support is not available. Cannot connect over SSL. Compile with -d:ssl to enable.")

    # May be connected through proxy but remember actual URL being accessed
    client.currentURL = url
    client.connected = true

proc readFileSizes(client: HttpClient | AsyncHttpClient,
                   multipart: MultipartData) {.multisync.} =
  for entry in multipart.content.mitems():
    if not entry.isFile: continue
    if not entry.isStream:
      entry.fileSize = entry.content.len
      continue

    # TODO: look into making getFileSize work with async
    let fileSize = getFileSize(entry.content)
    entry.fileSize = fileSize

proc format(entry: MultipartEntry, boundary: string): string =
  result = "--" & boundary & httpNewLine
  result.add("Content-Disposition: form-data; name=\"" & entry.name & "\"")
  if entry.isFile:
    result.add("; filename=\"" & entry.filename & "\"" & httpNewLine)
    result.add("Content-Type: " & entry.contentType & httpNewLine)
  else:
    result.add(httpNewLine & httpNewLine & entry.content)

proc format(client: HttpClient | AsyncHttpClient,
            multipart: MultipartData): Future[seq[string]] {.multisync.} =
  let bound = getBoundary(multipart)
  client.headers["Content-Type"] = "multipart/form-data; boundary=" & bound

  await client.readFileSizes(multipart)

  var length: int64
  for entry in multipart.content:
    result.add(format(entry, bound) & httpNewLine)
    if entry.isFile:
      length += entry.fileSize + httpNewLine.len

  result.add "--" & bound & "--"

  for s in result: length += s.len
  client.headers["Content-Length"] = $length

proc override(fallback, override: HttpHeaders): HttpHeaders =
  # Right-biased map union for `HttpHeaders`

  result = newHttpHeaders()
  # Copy by value
  result.table[] = fallback.table[]

  if override.isNil:
    # Return the copy of fallback so it does not get modified
    return result

  for k, vs in override.table:
    result[k] = vs

proc requestAux(client: HttpClient | AsyncHttpClient, url: Uri,
                httpMethod: HttpMethod, body = "", headers: HttpHeaders = nil,
                multipart: MultipartData = nil): Future[Response | AsyncResponse]
                {.multisync.} =
  # Helper that actually makes the request. Does not handle redirects.
  if url.scheme == "":
    raise newException(ValueError, "No uri scheme supplied.")

  when client is AsyncHttpClient:
    if not client.parseBodyFut.isNil:
      # let the current operation finish before making another request
      await client.parseBodyFut
      client.parseBodyFut = nil

  await newConnection(client, url)

  let newHeaders = client.headers.override(headers)

  var data: seq[string]
  if multipart != nil and multipart.content.len > 0:
    data = await client.format(multipart)
  else:
    # Only change headers if they have not been specified already
    if not newHeaders.hasKey("Content-Length"):
      if body.len != 0:
        newHeaders["Content-Length"] = $body.len
      elif httpMethod notin {HttpGet, HttpHead}:
        newHeaders["Content-Length"] = "0"

  if not newHeaders.hasKey("user-agent") and client.userAgent.len > 0:
    newHeaders["User-Agent"] = client.userAgent

  let headerString = generateHeaders(url, httpMethod, newHeaders,
                                     client.proxy)
  await client.socket.send(headerString)

  if data.len > 0:
    var buffer: string
    for i, entry in multipart.content:
      buffer.add data[i]
      if not entry.isFile: continue
      if buffer.len > 0:
        await client.socket.send(buffer)
        buffer.setLen(0)
      if entry.isStream:
        await client.socket.sendFile(entry)
      else:
        await client.socket.send(entry.content)
      buffer.add httpNewLine
    # send the rest and the last boundary
    await client.socket.send(buffer & data[^1])
  elif body.len > 0:
    await client.socket.send(body)

  let getBody = httpMethod notin {HttpHead, HttpConnect} and
                client.getBody
  result = await parseResponse(client, getBody)

proc request*(client: HttpClient | AsyncHttpClient, url: Uri | string,
              httpMethod: HttpMethod | string = HttpGet, body = "",
              headers: HttpHeaders = nil,
              multipart: MultipartData = nil): Future[Response | AsyncResponse]
              {.multisync.} =
  ## Connects to the hostname specified by the URL and performs a request
  ## using the HTTP method specified by `httpMethod`.
  ##
  ## Connection will be kept alive. Further requests on the same `client` to
  ## the same hostname will not require a new connection to be made. The
  ## connection can be closed by using the `close` procedure.
  ##
  ## This procedure will follow redirects up to a maximum number of redirects
  ## specified in `client.maxRedirects`.
  ##
  ## You need to make sure that the `url` doesn't contain any newline
  ## characters. Failing to do so will raise `AssertionDefect`.
  ##
  ## `headers` are HTTP headers that override the `client.headers` for
  ## this specific request only and will not be persisted.
  ##
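  ## A short sketch of a POST request with extra headers (the URL and header
  ## values are illustrative):
  ##
  ## .. code-block:: Nim
  ##   import std/httpclient
  ##
  ##   let client = newHttpClient()
  ##   let extra = newHttpHeaders({"Accept": "application/json"})
  ##   let resp = client.request("http://example.com", httpMethod = HttpPost,
  ##                             body = """{"data": 1}""", headers = extra)
  ##   echo resp.status
  ##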
  ## **Deprecated since v1.5**: use HttpMethod enum instead; string parameter httpMethod is deprecated
  when url is string:
    doAssert(not url.contains({'\c', '\L'}), "url shouldn't contain any newline characters")
    let url = parseUri(url)

  when httpMethod is string:
    {.warning:
       "Deprecated since v1.5; use HttpMethod enum instead; string parameter httpMethod is deprecated".}
    let httpMethod = case httpMethod
      of "HEAD":
        HttpHead
      of "GET":
        HttpGet
      of "POST":
        HttpPost
      of "PUT":
        HttpPut
      of "DELETE":
        HttpDelete
      of "TRACE":
        HttpTrace
      of "OPTIONS":
        HttpOptions
      of "CONNECT":
        HttpConnect
      of "PATCH":
        HttpPatch
      else:
        raise newException(ValueError, "Invalid HTTP method name: " & httpMethod)

  result = await client.requestAux(url, httpMethod, body, headers, multipart)

  var lastURL = url
  for i in 1..client.maxRedirects:
    let statusCode = result.code

    if statusCode notin {Http301, Http302, Http303, Http307, Http308}:
      break

    let redirectTo = getNewLocation(lastURL, result.headers)
    var redirectMethod: HttpMethod
    var redirectBody: string
    # For more information about the redirect methods see:
    # https://developer.mozilla.org/en-US/docs/Web/HTTP/Redirections
    case statusCode
    of Http301, Http302, Http303:
      # The method is changed to GET unless it is GET or HEAD (RFC2616)
      if httpMethod notin {HttpGet, HttpHead}:
        redirectMethod = HttpGet
      else:
        redirectMethod = httpMethod
      # The body is stripped away
      redirectBody = ""
      # Delete any header value associated with the body
      if not headers.isNil():
        headers.del("Content-Length")
        headers.del("Content-Type")
        headers.del("Transfer-Encoding")
    of Http307, Http308:
      # The method and the body are unchanged
      redirectMethod = httpMethod
      redirectBody = body
    else:
      # Unreachable
      doAssert(false)

    # Check if the redirection is to the same domain or a sub-domain (foo.com
    # -> sub.foo.com)
    if redirectTo.hostname != lastURL.hostname and
      not redirectTo.hostname.endsWith("." & lastURL.hostname):
      # Perform some cleanup of the header values
      if headers != nil:
        # Delete the Host header
        headers.del("Host")
        # Do not send any sensitive info to an unknown host
        headers.del("Authorization")

    result = await client.requestAux(redirectTo, redirectMethod, redirectBody,
                                     headers, multipart)
    lastURL = redirectTo

proc responseContent(resp: Response | AsyncResponse): Future[string] {.multisync.} =
  ## Returns the content of a response as a string.
  ##
  ## A `HttpRequestError` will be raised if the server responds with a
  ## client error (status code 4xx) or a server error (status code 5xx).
  if resp.code.is4xx or resp.code.is5xx:
    raise newException(HttpRequestError, resp.status)
  else:
    return await resp.bodyStream.readAll()

proc head*(client: HttpClient | AsyncHttpClient,
          url: Uri | string): Future[Response | AsyncResponse] {.multisync.} =
  ## Connects to the hostname specified by the URL and performs a HEAD request.
  ##
  ## This procedure uses httpClient values such as `client.maxRedirects`.
  result = await client.request(url, HttpHead)

proc get*(client: HttpClient | AsyncHttpClient,
          url: Uri | string): Future[Response | AsyncResponse] {.multisync.} =
  ## Connects to the hostname specified by the URL and performs a GET request.
  ##
  ## This procedure uses httpClient values such as `client.maxRedirects`.
  result = await client.request(url, HttpGet)

proc getContent*(client: HttpClient | AsyncHttpClient,
                 url: Uri | string): Future[string] {.multisync.} =
  ## Connects to the hostname specified by the URL and returns the content of a GET request.
  let resp = await get(client, url)
  return await responseContent(resp)

proc delete*(client: HttpClient | AsyncHttpClient,
             url: Uri | string): Future[Response | AsyncResponse] {.multisync.} =
  ## Connects to the hostname specified by the URL and performs a DELETE request.
  ## This procedure uses httpClient values such as `client.maxRedirects`.
  result = await client.request(url, HttpDelete)

proc deleteContent*(client: HttpClient | AsyncHttpClient,
                    url: Uri | string): Future[string] {.multisync.} =
  ## Connects to the hostname specified by the URL and returns the content of a DELETE request.
  let resp = await delete(client, url)
  return await responseContent(resp)

proc post*(client: HttpClient | AsyncHttpClient, url: Uri | string, body = "",
           multipart: MultipartData = nil): Future[Response | AsyncResponse]
           {.multisync.} =
  ## Connects to the hostname specified by the URL and performs a POST request.
  ## This procedure uses httpClient values such as `client.maxRedirects`.
  result = await client.request(url, HttpPost, body, multipart=multipart)

proc postContent*(client: HttpClient | AsyncHttpClient, url: Uri | string, body = "",
                  multipart: MultipartData = nil): Future[string]
                  {.multisync.} =
  ## Connects to the hostname specified by the URL and returns the content of a POST request.
  let resp = await post(client, url, body, multipart)
  return await responseContent(resp)

proc put*(client: HttpClient | AsyncHttpClient, url: Uri | string, body = "",
          multipart: MultipartData = nil): Future[Response | AsyncResponse]
          {.multisync.} =
  ## Connects to the hostname specified by the URL and performs a PUT request.
  ## This procedure uses httpClient values such as `client.maxRedirects`.
  result = await client.request(url, HttpPut, body, multipart=multipart)

proc putContent*(client: HttpClient | AsyncHttpClient, url: Uri | string, body = "",
                 multipart: MultipartData = nil): Future[string] {.multisync.} =
  ## Connects to the hostname specified by the URL and returns the content of a PUT request.
  let resp = await put(client, url, body, multipart)
  return await responseContent(resp)

proc patch*(client: HttpClient | AsyncHttpClient, url: Uri | string, body = "",
            multipart: MultipartData = nil): Future[Response | AsyncResponse]
            {.multisync.} =
  ## Connects to the hostname specified by the URL and performs a PATCH request.
  ## This procedure uses httpClient values such as `client.maxRedirects`.
  result = await client.request(url, HttpPatch, body, multipart=multipart)

proc patchContent*(client: HttpClient | AsyncHttpClient, url: Uri | string, body = "",
                   multipart: MultipartData = nil): Future[string]
                  {.multisync.} =
  ## Connects to the hostname specified by the URL and returns the content of a PATCH request.
  let resp = await patch(client, url, body, multipart)
  return await responseContent(resp)

proc downloadFile*(client: HttpClient, url: Uri | string, filename: string) =
  ## Downloads `url` and saves it to `filename`.
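  ##
  ## A brief usage sketch (the URL and file name are illustrative):
  ##
  ## .. code-block:: Nim
  ##   import std/httpclient
  ##
  ##   let client = newHttpClient()
  ##   client.downloadFile("http://example.com/", "example.html")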
  client.getBody = false
  defer:
    client.getBody = true
  let resp = client.get(url)
  
  if resp.code.is4xx or resp.code.is5xx:
    raise newException(HttpRequestError, resp.status)

  client.bodyStream = newFileStream(filename, fmWrite)
  if client.bodyStream.isNil:
    fileError("Unable to open file")
  parseBody(client, resp.headers, resp.version)
  client.bodyStream.close()

proc downloadFileEx(client: AsyncHttpClient,
                    url: Uri | string, filename: string): Future[void] {.async.} =
  ## Downloads `url` and saves it to `filename`.
  client.getBody = false
  let resp = await client.get(url)
  
  if resp.code.is4xx or resp.code.is5xx:
    raise newException(HttpRequestError, resp.status)

  client.bodyStream = newFutureStream[string]("downloadFile")
  var file = openAsync(filename, fmWrite)
  defer: file.close()
  # Let `parseBody` write response data into client.bodyStream in the
  # background.
  let parseBodyFut = parseBody(client, resp.headers, resp.version)
  parseBodyFut.addCallback do():
    if parseBodyFut.failed:
      client.bodyStream.fail(parseBodyFut.error)
  # The `writeFromStream` proc will complete once all the data in the
  # `bodyStream` has been written to the file.
  await file.writeFromStream(client.bodyStream)

proc downloadFile*(client: AsyncHttpClient, url: Uri | string,
                   filename: string): Future[void] =
  result = newFuture[void]("downloadFile")
  try:
    result = downloadFileEx(client, url, filename)
  except Exception as exc:
    result.fail(exc)
  finally:
    result.addCallback(
      proc () = client.getBody = true
    )